00:00:00.001 Started by upstream project "autotest-per-patch" build number 132748 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.230 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.231 The recommended git tool is: git 00:00:00.231 using credential 00000000-0000-0000-0000-000000000002 00:00:00.234 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.255 Fetching changes from the remote Git repository 00:00:00.259 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.282 Using shallow fetch with depth 1 00:00:00.282 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.282 > git --version # timeout=10 00:00:00.305 > git --version # 'git version 2.39.2' 00:00:00.305 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.316 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.316 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.136 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.147 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.159 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.159 > git config core.sparsecheckout # timeout=10 00:00:06.170 > git read-tree -mu HEAD # timeout=10 00:00:06.185 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.206 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.206 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.292 [Pipeline] Start of Pipeline 00:00:06.306 [Pipeline] library 00:00:06.308 Loading library shm_lib@master 00:00:06.308 Library shm_lib@master is cached. Copying from home. 00:00:06.323 [Pipeline] node 00:00:06.332 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:00:06.333 [Pipeline] { 00:00:06.340 [Pipeline] catchError 00:00:06.341 [Pipeline] { 00:00:06.352 [Pipeline] wrap 00:00:06.359 [Pipeline] { 00:00:06.366 [Pipeline] stage 00:00:06.368 [Pipeline] { (Prologue) 00:00:06.383 [Pipeline] echo 00:00:06.384 Node: VM-host-SM9 00:00:06.390 [Pipeline] cleanWs 00:00:06.398 [WS-CLEANUP] Deleting project workspace... 00:00:06.398 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.411 [WS-CLEANUP] done 00:00:06.598 [Pipeline] setCustomBuildProperty 00:00:06.672 [Pipeline] httpRequest 00:00:07.040 [Pipeline] echo 00:00:07.041 Sorcerer 10.211.164.101 is alive 00:00:07.050 [Pipeline] retry 00:00:07.051 [Pipeline] { 00:00:07.062 [Pipeline] httpRequest 00:00:07.065 HttpMethod: GET 00:00:07.066 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.066 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.078 Response Code: HTTP/1.1 200 OK 00:00:07.078 Success: Status code 200 is in the accepted range: 200,404 00:00:07.079 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.041 [Pipeline] } 00:00:31.057 [Pipeline] // retry 00:00:31.064 [Pipeline] sh 00:00:31.341 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:31.356 [Pipeline] httpRequest 00:00:31.808 [Pipeline] echo 00:00:31.811 Sorcerer 10.211.164.101 is alive 00:00:31.821 [Pipeline] retry 00:00:31.824 [Pipeline] { 00:00:31.838 [Pipeline] httpRequest 00:00:31.843 HttpMethod: GET 00:00:31.843 URL: http://10.211.164.101/packages/spdk_c7acbd6bef98bb47b167bdcbe5efc40128fa190b.tar.gz 00:00:31.844 Sending request to url: http://10.211.164.101/packages/spdk_c7acbd6bef98bb47b167bdcbe5efc40128fa190b.tar.gz 00:00:31.895 Response Code: HTTP/1.1 200 OK 00:00:31.896 Success: Status code 200 is in the accepted range: 200,404 00:00:31.896 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_c7acbd6bef98bb47b167bdcbe5efc40128fa190b.tar.gz 00:01:15.919 [Pipeline] } 00:01:15.939 [Pipeline] // retry 00:01:15.948 [Pipeline] sh 00:01:16.228 + tar --no-same-owner -xf spdk_c7acbd6bef98bb47b167bdcbe5efc40128fa190b.tar.gz 00:01:19.527 [Pipeline] sh 00:01:19.810 + git -C spdk log --oneline -n5 00:01:19.810 c7acbd6be test/iscsi_tgt: Remove support for the namespace arg 00:01:19.810 a5e6ecf28 lib/reduce: Data copy logic in thin read operations 00:01:19.810 a333974e5 nvme/rdma: Flush queued send WRs when disconnecting a qpair 00:01:19.810 2b8672176 nvme/rdma: Prevent submitting new recv WR when disconnecting 00:01:19.810 e2dfdf06c accel/mlx5: Register post_poller handler 00:01:19.851 [Pipeline] writeFile 00:01:19.890 [Pipeline] sh 00:01:20.203 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:20.216 [Pipeline] sh 00:01:20.498 + cat autorun-spdk.conf 00:01:20.498 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.498 SPDK_TEST_NVMF=1 00:01:20.498 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.498 SPDK_TEST_USDT=1 00:01:20.498 SPDK_TEST_NVMF_MDNS=1 00:01:20.498 SPDK_RUN_UBSAN=1 00:01:20.498 NET_TYPE=virt 00:01:20.498 SPDK_JSONRPC_GO_CLIENT=1 00:01:20.498 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.505 RUN_NIGHTLY=0 00:01:20.507 [Pipeline] } 00:01:20.520 [Pipeline] // stage 00:01:20.534 [Pipeline] stage 00:01:20.536 [Pipeline] { (Run VM) 00:01:20.547 [Pipeline] sh 00:01:20.827 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:20.827 + echo 'Start stage prepare_nvme.sh' 00:01:20.827 Start stage prepare_nvme.sh 00:01:20.827 + [[ -n 4 ]] 00:01:20.827 + disk_prefix=ex4 00:01:20.827 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:01:20.827 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:01:20.827 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:01:20.827 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.827 ++ SPDK_TEST_NVMF=1 00:01:20.827 ++ 
SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.827 ++ SPDK_TEST_USDT=1 00:01:20.827 ++ SPDK_TEST_NVMF_MDNS=1 00:01:20.827 ++ SPDK_RUN_UBSAN=1 00:01:20.827 ++ NET_TYPE=virt 00:01:20.827 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:20.827 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:20.827 ++ RUN_NIGHTLY=0 00:01:20.827 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:20.827 + nvme_files=() 00:01:20.827 + declare -A nvme_files 00:01:20.827 + backend_dir=/var/lib/libvirt/images/backends 00:01:20.827 + nvme_files['nvme.img']=5G 00:01:20.827 + nvme_files['nvme-cmb.img']=5G 00:01:20.827 + nvme_files['nvme-multi0.img']=4G 00:01:20.827 + nvme_files['nvme-multi1.img']=4G 00:01:20.827 + nvme_files['nvme-multi2.img']=4G 00:01:20.827 + nvme_files['nvme-openstack.img']=8G 00:01:20.827 + nvme_files['nvme-zns.img']=5G 00:01:20.827 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:20.827 + (( SPDK_TEST_FTL == 1 )) 00:01:20.827 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:20.827 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:20.827 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:20.827 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:20.827 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:20.827 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:20.827 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:20.827 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:20.827 + for nvme in "${!nvme_files[@]}" 00:01:20.827 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:21.085 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:21.085 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:21.085 + echo 'End stage prepare_nvme.sh' 00:01:21.085 End stage prepare_nvme.sh 00:01:21.095 [Pipeline] sh 00:01:21.371 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:21.371 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b 
/var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:01:21.371 00:01:21.371 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:01:21.371 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:01:21.371 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:21.371 HELP=0 00:01:21.371 DRY_RUN=0 00:01:21.371 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:01:21.371 NVME_DISKS_TYPE=nvme,nvme, 00:01:21.371 NVME_AUTO_CREATE=0 00:01:21.371 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:01:21.371 NVME_CMB=,, 00:01:21.371 NVME_PMR=,, 00:01:21.371 NVME_ZNS=,, 00:01:21.371 NVME_MS=,, 00:01:21.371 NVME_FDP=,, 00:01:21.371 SPDK_VAGRANT_DISTRO=fedora39 00:01:21.371 SPDK_VAGRANT_VMCPU=10 00:01:21.371 SPDK_VAGRANT_VMRAM=12288 00:01:21.371 SPDK_VAGRANT_PROVIDER=libvirt 00:01:21.371 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:21.371 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:21.371 SPDK_OPENSTACK_NETWORK=0 00:01:21.371 VAGRANT_PACKAGE_BOX=0 00:01:21.371 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:21.371 FORCE_DISTRO=true 00:01:21.371 VAGRANT_BOX_VERSION= 00:01:21.371 EXTRA_VAGRANTFILES= 00:01:21.371 NIC_MODEL=e1000 00:01:21.371 00:01:21.371 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt' 00:01:21.371 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:01:24.690 Bringing machine 'default' up with 'libvirt' provider... 00:01:25.269 ==> default: Creating image (snapshot of base box volume). 00:01:25.527 ==> default: Creating domain with the following settings... 
00:01:25.527 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733504467_b5ad18009df6580f8566 00:01:25.527 ==> default: -- Domain type: kvm 00:01:25.527 ==> default: -- Cpus: 10 00:01:25.527 ==> default: -- Feature: acpi 00:01:25.527 ==> default: -- Feature: apic 00:01:25.527 ==> default: -- Feature: pae 00:01:25.527 ==> default: -- Memory: 12288M 00:01:25.527 ==> default: -- Memory Backing: hugepages: 00:01:25.527 ==> default: -- Management MAC: 00:01:25.527 ==> default: -- Loader: 00:01:25.527 ==> default: -- Nvram: 00:01:25.527 ==> default: -- Base box: spdk/fedora39 00:01:25.527 ==> default: -- Storage pool: default 00:01:25.527 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733504467_b5ad18009df6580f8566.img (20G) 00:01:25.527 ==> default: -- Volume Cache: default 00:01:25.527 ==> default: -- Kernel: 00:01:25.527 ==> default: -- Initrd: 00:01:25.527 ==> default: -- Graphics Type: vnc 00:01:25.527 ==> default: -- Graphics Port: -1 00:01:25.527 ==> default: -- Graphics IP: 127.0.0.1 00:01:25.527 ==> default: -- Graphics Password: Not defined 00:01:25.527 ==> default: -- Video Type: cirrus 00:01:25.527 ==> default: -- Video VRAM: 9216 00:01:25.527 ==> default: -- Sound Type: 00:01:25.527 ==> default: -- Keymap: en-us 00:01:25.527 ==> default: -- TPM Path: 00:01:25.527 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:25.527 ==> default: -- Command line args: 00:01:25.527 ==> default: -> value=-device, 00:01:25.527 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:25.527 ==> default: -> value=-drive, 00:01:25.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:01:25.527 ==> default: -> value=-device, 00:01:25.527 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.527 ==> default: -> value=-device, 00:01:25.527 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:25.527 ==> default: -> value=-drive, 00:01:25.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:25.527 ==> default: -> value=-device, 00:01:25.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.527 ==> default: -> value=-drive, 00:01:25.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:25.527 ==> default: -> value=-device, 00:01:25.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.527 ==> default: -> value=-drive, 00:01:25.527 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:25.527 ==> default: -> value=-device, 00:01:25.527 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:25.527 ==> default: Creating shared folders metadata... 00:01:25.527 ==> default: Starting domain. 00:01:26.905 ==> default: Waiting for domain to get an IP address... 00:01:41.788 ==> default: Waiting for SSH to become available... 00:01:43.164 ==> default: Configuring and enabling network interfaces... 
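[Annotation] The QEMU command line assembled above gives the guest two emulated NVMe controllers: serial 12340 backed by ex4-nvme.img with a single namespace, and serial 12341 backed by the three ex4-nvme-multi*.img files as namespaces 1-3, which is what later surfaces in the guest as nvme0n1 and nvme1n1 through nvme1n3. A minimal sketch of the same device wiring against a stock qemu-system-x86_64, assuming QEMU 6.0+ and a scratch raw image at /tmp/nvme.img (hypothetical path; this only demonstrates the controller/namespace plumbing, not a bootable guest):

    # Create a small raw backing file for the emulated namespace.
    qemu-img create -f raw /tmp/nvme.img 1G
    # One NVMe controller (serial 12340) with one namespace on that file,
    # mirroring the -drive/-device pairs in the log above.
    qemu-system-x86_64 -machine q35 -m 1024 -nographic \
      -drive format=raw,file=/tmp/nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme,id=nvme-0,serial=12340 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1

Multi-namespace controllers follow the same pattern: one nvme device, then one nvme-ns per backing drive with an increasing nsid, exactly as in the ex4-nvme-multi* arguments above.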
00:01:47.374 default: SSH address: 192.168.121.115:22 00:01:47.374 default: SSH username: vagrant 00:01:47.374 default: SSH auth method: private key 00:01:49.907 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:58.063 ==> default: Mounting SSHFS shared folder... 00:01:58.629 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:58.629 ==> default: Checking Mount.. 00:01:59.564 ==> default: Folder Successfully Mounted! 00:01:59.564 ==> default: Running provisioner: file... 00:02:00.498 default: ~/.gitconfig => .gitconfig 00:02:00.756 00:02:00.756 SUCCESS! 00:02:00.756 00:02:00.756 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:00.756 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:00.756 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:00.756 00:02:00.764 [Pipeline] } 00:02:00.778 [Pipeline] // stage 00:02:00.787 [Pipeline] dir 00:02:00.788 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt 00:02:00.789 [Pipeline] { 00:02:00.802 [Pipeline] catchError 00:02:00.804 [Pipeline] { 00:02:00.817 [Pipeline] sh 00:02:01.094 + vagrant ssh-config --host vagrant 00:02:01.095 + sed -ne /^Host/,$p 00:02:01.095 + tee ssh_conf 00:02:05.274 Host vagrant 00:02:05.274 HostName 192.168.121.115 00:02:05.274 User vagrant 00:02:05.274 Port 22 00:02:05.274 UserKnownHostsFile /dev/null 00:02:05.274 StrictHostKeyChecking no 00:02:05.274 PasswordAuthentication no 00:02:05.274 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:05.274 IdentitiesOnly yes 00:02:05.274 LogLevel FATAL 00:02:05.274 ForwardAgent yes 00:02:05.274 ForwardX11 yes 00:02:05.274 00:02:05.286 [Pipeline] withEnv 00:02:05.289 [Pipeline] { 00:02:05.302 [Pipeline] sh 00:02:05.578 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:05.579 source /etc/os-release 00:02:05.579 [[ -e /image.version ]] && img=$(< /image.version) 00:02:05.579 # Minimal, systemd-like check. 00:02:05.579 if [[ -e /.dockerenv ]]; then 00:02:05.579 # Clear garbage from the node's name: 00:02:05.579 # agt-er_autotest_547-896 -> autotest_547-896 00:02:05.579 # $HOSTNAME is the actual container id 00:02:05.579 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:05.579 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:05.579 # We can assume this is a mount from a host where container is running, 00:02:05.579 # so fetch its hostname to easily identify the target swarm worker. 
00:02:05.579 container="$(< /etc/hostname) ($agent)" 00:02:05.579 else 00:02:05.579 # Fallback 00:02:05.579 container=$agent 00:02:05.579 fi 00:02:05.579 fi 00:02:05.579 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:05.579 00:02:05.589 [Pipeline] } 00:02:05.604 [Pipeline] // withEnv 00:02:05.612 [Pipeline] setCustomBuildProperty 00:02:05.629 [Pipeline] stage 00:02:05.631 [Pipeline] { (Tests) 00:02:05.648 [Pipeline] sh 00:02:05.925 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:06.195 [Pipeline] sh 00:02:06.545 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:06.816 [Pipeline] timeout 00:02:06.817 Timeout set to expire in 1 hr 0 min 00:02:06.819 [Pipeline] { 00:02:06.834 [Pipeline] sh 00:02:07.111 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:07.677 HEAD is now at c7acbd6be test/iscsi_tgt: Remove support for the namespace arg 00:02:07.689 [Pipeline] sh 00:02:07.968 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:08.241 [Pipeline] sh 00:02:08.520 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:08.536 [Pipeline] sh 00:02:08.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:02:08.815 ++ readlink -f spdk_repo 00:02:08.815 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:08.815 + [[ -n /home/vagrant/spdk_repo ]] 00:02:08.815 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:08.815 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:08.815 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:08.815 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:08.815 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:08.815 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:02:08.815 + cd /home/vagrant/spdk_repo 00:02:08.815 + source /etc/os-release 00:02:08.815 ++ NAME='Fedora Linux' 00:02:08.815 ++ VERSION='39 (Cloud Edition)' 00:02:08.815 ++ ID=fedora 00:02:08.815 ++ VERSION_ID=39 00:02:08.815 ++ VERSION_CODENAME= 00:02:08.815 ++ PLATFORM_ID=platform:f39 00:02:08.815 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:08.815 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:08.815 ++ LOGO=fedora-logo-icon 00:02:08.815 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:08.815 ++ HOME_URL=https://fedoraproject.org/ 00:02:08.815 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:08.815 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:08.815 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:08.815 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:08.815 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:08.815 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:08.815 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:08.815 ++ SUPPORT_END=2024-11-12 00:02:08.815 ++ VARIANT='Cloud Edition' 00:02:08.815 ++ VARIANT_ID=cloud 00:02:08.815 + uname -a 00:02:08.815 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:08.815 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:09.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:09.382 Hugepages 00:02:09.382 node hugesize free / total 00:02:09.382 node0 1048576kB 0 / 0 00:02:09.382 node0 2048kB 0 / 0 00:02:09.382 00:02:09.382 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:09.382 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:09.382 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:09.382 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:09.382 + rm -f /tmp/spdk-ld-path 00:02:09.382 + source autorun-spdk.conf 00:02:09.382 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.382 ++ SPDK_TEST_NVMF=1 00:02:09.382 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.382 ++ SPDK_TEST_USDT=1 00:02:09.382 ++ SPDK_TEST_NVMF_MDNS=1 00:02:09.382 ++ SPDK_RUN_UBSAN=1 00:02:09.382 ++ NET_TYPE=virt 00:02:09.382 ++ SPDK_JSONRPC_GO_CLIENT=1 00:02:09.382 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.382 ++ RUN_NIGHTLY=0 00:02:09.382 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:09.382 + [[ -n '' ]] 00:02:09.382 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:09.642 + for M in /var/spdk/build-*-manifest.txt 00:02:09.642 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:09.642 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.642 + for M in /var/spdk/build-*-manifest.txt 00:02:09.642 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:09.642 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.642 + for M in /var/spdk/build-*-manifest.txt 00:02:09.642 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:09.642 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:09.642 ++ uname 00:02:09.642 + [[ Linux == \L\i\n\u\x ]] 00:02:09.642 + sudo dmesg -T 00:02:09.642 + sudo dmesg --clear 00:02:09.642 + dmesg_pid=5267 00:02:09.642 + sudo dmesg -Tw 00:02:09.642 + [[ Fedora Linux == FreeBSD ]] 00:02:09.642 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 
00:02:09.642 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:09.642 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:09.642 + [[ -x /usr/src/fio-static/fio ]] 00:02:09.642 + export FIO_BIN=/usr/src/fio-static/fio 00:02:09.642 + FIO_BIN=/usr/src/fio-static/fio 00:02:09.642 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:09.642 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:09.642 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:09.642 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.642 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:09.642 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:09.642 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.642 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:09.642 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.642 17:01:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:09.642 17:01:51 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:09.642 17:01:51 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:09.642 17:01:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:09.642 17:01:51 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:09.642 17:01:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:09.642 17:01:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:09.642 17:01:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:09.642 17:01:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:09.642 17:01:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.642 17:01:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.642 17:01:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.642 17:01:51 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.642 17:01:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.642 17:01:51 -- paths/export.sh@5 -- $ export PATH 00:02:09.642 17:01:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.642 17:01:51 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:09.642 17:01:51 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:09.642 17:01:51 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733504511.XXXXXX 00:02:09.642 17:01:51 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733504511.SSZnfv 00:02:09.642 17:01:51 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:09.642 17:01:51 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:09.642 17:01:51 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:09.642 17:01:51 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:09.642 17:01:51 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:09.642 17:01:51 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:09.642 17:01:51 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:09.642 17:01:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.901 17:01:51 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:02:09.901 17:01:51 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:09.901 17:01:51 -- pm/common@17 -- $ local monitor 00:02:09.901 17:01:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.901 17:01:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.901 17:01:51 -- pm/common@25 -- $ sleep 1 00:02:09.901 17:01:51 -- pm/common@21 -- $ date +%s 00:02:09.901 17:01:51 -- pm/common@21 -- $ date +%s 00:02:09.901 17:01:51 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733504511 00:02:09.901 17:01:51 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733504511 00:02:09.901 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733504511_collect-vmstat.pm.log 00:02:09.901 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733504511_collect-cpu-load.pm.log 00:02:10.836 17:01:52 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:10.836 17:01:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:10.836 17:01:52 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:10.836 17:01:52 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:10.836 17:01:52 -- spdk/autobuild.sh@16 -- $ date -u 00:02:10.836 Fri Dec 6 05:01:52 PM UTC 2024 00:02:10.836 17:01:52 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:10.836 v25.01-pre-304-gc7acbd6be 00:02:10.836 17:01:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:10.836 17:01:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:10.836 17:01:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:10.836 17:01:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:10.836 17:01:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:10.836 17:01:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:10.836 ************************************ 00:02:10.836 START TEST ubsan 00:02:10.836 ************************************ 00:02:10.836 using ubsan 00:02:10.836 17:01:52 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:10.836 00:02:10.836 real 0m0.000s 00:02:10.836 user 0m0.000s 00:02:10.836 sys 0m0.000s 00:02:10.836 17:01:52 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:10.836 ************************************ 00:02:10.836 17:01:52 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:10.836 END TEST ubsan 00:02:10.836 ************************************ 00:02:10.836 17:01:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:10.837 17:01:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:10.837 17:01:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:10.837 17:01:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:10.837 17:01:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:10.837 17:01:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:10.837 17:01:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:10.837 17:01:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:10.837 17:01:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:02:10.837 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:10.837 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:11.403 Using 'verbs' RDMA provider 00:02:24.534 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:39.400 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:39.400 go version go1.21.1 linux/amd64 00:02:39.400 Creating mk/config.mk...done. 00:02:39.400 Creating mk/cc.flags.mk...done. 
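[Annotation] The configure invocation above is what enables UBSan plus the USDT, ublk, Avahi, and Go pieces requested by this job's autorun-spdk.conf flags. A minimal sketch of reproducing a similar debug+UBSan SPDK build outside CI, assuming a Fedora-like host and using the repo's own dependency script (the full CI flag set is longer, as shown in the configure line above):

    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init            # pulls the bundled DPDK, among others
    sudo ./scripts/pkgdep.sh               # install build dependencies
    ./configure --enable-debug --enable-ubsan
    make -j"$(nproc)"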
00:02:39.400 Type 'make' to build. 00:02:39.400 17:02:19 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:39.400 17:02:19 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:39.400 17:02:19 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:39.400 17:02:19 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.400 ************************************ 00:02:39.400 START TEST make 00:02:39.400 ************************************ 00:02:39.400 17:02:19 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:39.400 make[1]: Nothing to be done for 'all'. 00:02:54.274 The Meson build system 00:02:54.274 Version: 1.5.0 00:02:54.274 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:54.274 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:54.274 Build type: native build 00:02:54.274 Program cat found: YES (/usr/bin/cat) 00:02:54.274 Project name: DPDK 00:02:54.274 Project version: 24.03.0 00:02:54.274 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:54.274 C linker for the host machine: cc ld.bfd 2.40-14 00:02:54.274 Host machine cpu family: x86_64 00:02:54.274 Host machine cpu: x86_64 00:02:54.274 Message: ## Building in Developer Mode ## 00:02:54.274 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:54.274 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:54.274 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:54.274 Program python3 found: YES (/usr/bin/python3) 00:02:54.274 Program cat found: YES (/usr/bin/cat) 00:02:54.274 Compiler for C supports arguments -march=native: YES 00:02:54.274 Checking for size of "void *" : 8 00:02:54.274 Checking for size of "void *" : 8 (cached) 00:02:54.274 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:54.274 Library m found: YES 00:02:54.274 Library numa found: YES 00:02:54.274 Has header "numaif.h" : YES 00:02:54.274 Library fdt found: NO 00:02:54.274 Library execinfo found: NO 00:02:54.274 Has header "execinfo.h" : YES 00:02:54.274 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:54.274 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:54.274 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:54.274 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:54.274 Run-time dependency openssl found: YES 3.1.1 00:02:54.274 Run-time dependency libpcap found: YES 1.10.4 00:02:54.274 Has header "pcap.h" with dependency libpcap: YES 00:02:54.274 Compiler for C supports arguments -Wcast-qual: YES 00:02:54.274 Compiler for C supports arguments -Wdeprecated: YES 00:02:54.274 Compiler for C supports arguments -Wformat: YES 00:02:54.274 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:54.274 Compiler for C supports arguments -Wformat-security: NO 00:02:54.274 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:54.274 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:54.274 Compiler for C supports arguments -Wnested-externs: YES 00:02:54.274 Compiler for C supports arguments -Wold-style-definition: YES 00:02:54.274 Compiler for C supports arguments -Wpointer-arith: YES 00:02:54.274 Compiler for C supports arguments -Wsign-compare: YES 00:02:54.274 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:54.274 Compiler for C supports arguments -Wundef: YES 00:02:54.274 Compiler for C supports arguments -Wwrite-strings: YES 
00:02:54.274 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:54.274 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:54.274 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:54.274 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:54.274 Program objdump found: YES (/usr/bin/objdump) 00:02:54.274 Compiler for C supports arguments -mavx512f: YES 00:02:54.274 Checking if "AVX512 checking" compiles: YES 00:02:54.274 Fetching value of define "__SSE4_2__" : 1 00:02:54.274 Fetching value of define "__AES__" : 1 00:02:54.274 Fetching value of define "__AVX__" : 1 00:02:54.274 Fetching value of define "__AVX2__" : 1 00:02:54.274 Fetching value of define "__AVX512BW__" : (undefined) 00:02:54.274 Fetching value of define "__AVX512CD__" : (undefined) 00:02:54.274 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:54.274 Fetching value of define "__AVX512F__" : (undefined) 00:02:54.274 Fetching value of define "__AVX512VL__" : (undefined) 00:02:54.274 Fetching value of define "__PCLMUL__" : 1 00:02:54.274 Fetching value of define "__RDRND__" : 1 00:02:54.274 Fetching value of define "__RDSEED__" : 1 00:02:54.274 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:54.274 Fetching value of define "__znver1__" : (undefined) 00:02:54.274 Fetching value of define "__znver2__" : (undefined) 00:02:54.274 Fetching value of define "__znver3__" : (undefined) 00:02:54.274 Fetching value of define "__znver4__" : (undefined) 00:02:54.274 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:54.274 Message: lib/log: Defining dependency "log" 00:02:54.274 Message: lib/kvargs: Defining dependency "kvargs" 00:02:54.274 Message: lib/telemetry: Defining dependency "telemetry" 00:02:54.274 Checking for function "getentropy" : NO 00:02:54.274 Message: lib/eal: Defining dependency "eal" 00:02:54.274 Message: lib/ring: Defining dependency "ring" 00:02:54.274 Message: lib/rcu: Defining dependency "rcu" 00:02:54.274 Message: lib/mempool: Defining dependency "mempool" 00:02:54.274 Message: lib/mbuf: Defining dependency "mbuf" 00:02:54.274 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:54.274 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:54.274 Compiler for C supports arguments -mpclmul: YES 00:02:54.274 Compiler for C supports arguments -maes: YES 00:02:54.274 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:54.274 Compiler for C supports arguments -mavx512bw: YES 00:02:54.274 Compiler for C supports arguments -mavx512dq: YES 00:02:54.274 Compiler for C supports arguments -mavx512vl: YES 00:02:54.274 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:54.274 Compiler for C supports arguments -mavx2: YES 00:02:54.274 Compiler for C supports arguments -mavx: YES 00:02:54.274 Message: lib/net: Defining dependency "net" 00:02:54.274 Message: lib/meter: Defining dependency "meter" 00:02:54.274 Message: lib/ethdev: Defining dependency "ethdev" 00:02:54.274 Message: lib/pci: Defining dependency "pci" 00:02:54.274 Message: lib/cmdline: Defining dependency "cmdline" 00:02:54.274 Message: lib/hash: Defining dependency "hash" 00:02:54.274 Message: lib/timer: Defining dependency "timer" 00:02:54.274 Message: lib/compressdev: Defining dependency "compressdev" 00:02:54.274 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:54.274 Message: lib/dmadev: Defining dependency "dmadev" 00:02:54.274 Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:54.274 Message: lib/power: Defining dependency "power" 00:02:54.274 Message: lib/reorder: Defining dependency "reorder" 00:02:54.274 Message: lib/security: Defining dependency "security" 00:02:54.274 Has header "linux/userfaultfd.h" : YES 00:02:54.274 Has header "linux/vduse.h" : YES 00:02:54.274 Message: lib/vhost: Defining dependency "vhost" 00:02:54.274 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:54.274 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:54.274 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:54.274 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:54.274 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:54.274 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:54.275 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:54.275 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:54.275 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:54.275 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:54.275 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:54.275 Configuring doxy-api-html.conf using configuration 00:02:54.275 Configuring doxy-api-man.conf using configuration 00:02:54.275 Program mandb found: YES (/usr/bin/mandb) 00:02:54.275 Program sphinx-build found: NO 00:02:54.275 Configuring rte_build_config.h using configuration 00:02:54.275 Message: 00:02:54.275 ================= 00:02:54.275 Applications Enabled 00:02:54.275 ================= 00:02:54.275 00:02:54.275 apps: 00:02:54.275 00:02:54.275 00:02:54.275 Message: 00:02:54.275 ================= 00:02:54.275 Libraries Enabled 00:02:54.275 ================= 00:02:54.275 00:02:54.275 libs: 00:02:54.275 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:54.275 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:54.275 cryptodev, dmadev, power, reorder, security, vhost, 00:02:54.275 00:02:54.275 Message: 00:02:54.275 =============== 00:02:54.275 Drivers Enabled 00:02:54.275 =============== 00:02:54.275 00:02:54.275 common: 00:02:54.275 00:02:54.275 bus: 00:02:54.275 pci, vdev, 00:02:54.275 mempool: 00:02:54.275 ring, 00:02:54.275 dma: 00:02:54.275 00:02:54.275 net: 00:02:54.275 00:02:54.275 crypto: 00:02:54.275 00:02:54.275 compress: 00:02:54.275 00:02:54.275 vdpa: 00:02:54.275 00:02:54.275 00:02:54.275 Message: 00:02:54.275 ================= 00:02:54.275 Content Skipped 00:02:54.275 ================= 00:02:54.275 00:02:54.275 apps: 00:02:54.275 dumpcap: explicitly disabled via build config 00:02:54.275 graph: explicitly disabled via build config 00:02:54.275 pdump: explicitly disabled via build config 00:02:54.275 proc-info: explicitly disabled via build config 00:02:54.275 test-acl: explicitly disabled via build config 00:02:54.275 test-bbdev: explicitly disabled via build config 00:02:54.275 test-cmdline: explicitly disabled via build config 00:02:54.275 test-compress-perf: explicitly disabled via build config 00:02:54.275 test-crypto-perf: explicitly disabled via build config 00:02:54.275 test-dma-perf: explicitly disabled via build config 00:02:54.275 test-eventdev: explicitly disabled via build config 00:02:54.275 test-fib: explicitly disabled via build config 00:02:54.275 test-flow-perf: explicitly disabled via build config 00:02:54.275 test-gpudev: explicitly disabled via build config 00:02:54.275 test-mldev: explicitly disabled via 
build config 00:02:54.275 test-pipeline: explicitly disabled via build config 00:02:54.275 test-pmd: explicitly disabled via build config 00:02:54.275 test-regex: explicitly disabled via build config 00:02:54.275 test-sad: explicitly disabled via build config 00:02:54.275 test-security-perf: explicitly disabled via build config 00:02:54.275 00:02:54.275 libs: 00:02:54.275 argparse: explicitly disabled via build config 00:02:54.275 metrics: explicitly disabled via build config 00:02:54.275 acl: explicitly disabled via build config 00:02:54.275 bbdev: explicitly disabled via build config 00:02:54.275 bitratestats: explicitly disabled via build config 00:02:54.275 bpf: explicitly disabled via build config 00:02:54.275 cfgfile: explicitly disabled via build config 00:02:54.275 distributor: explicitly disabled via build config 00:02:54.275 efd: explicitly disabled via build config 00:02:54.275 eventdev: explicitly disabled via build config 00:02:54.275 dispatcher: explicitly disabled via build config 00:02:54.275 gpudev: explicitly disabled via build config 00:02:54.275 gro: explicitly disabled via build config 00:02:54.275 gso: explicitly disabled via build config 00:02:54.275 ip_frag: explicitly disabled via build config 00:02:54.275 jobstats: explicitly disabled via build config 00:02:54.275 latencystats: explicitly disabled via build config 00:02:54.275 lpm: explicitly disabled via build config 00:02:54.275 member: explicitly disabled via build config 00:02:54.275 pcapng: explicitly disabled via build config 00:02:54.275 rawdev: explicitly disabled via build config 00:02:54.275 regexdev: explicitly disabled via build config 00:02:54.275 mldev: explicitly disabled via build config 00:02:54.275 rib: explicitly disabled via build config 00:02:54.275 sched: explicitly disabled via build config 00:02:54.275 stack: explicitly disabled via build config 00:02:54.275 ipsec: explicitly disabled via build config 00:02:54.275 pdcp: explicitly disabled via build config 00:02:54.275 fib: explicitly disabled via build config 00:02:54.275 port: explicitly disabled via build config 00:02:54.275 pdump: explicitly disabled via build config 00:02:54.275 table: explicitly disabled via build config 00:02:54.275 pipeline: explicitly disabled via build config 00:02:54.275 graph: explicitly disabled via build config 00:02:54.275 node: explicitly disabled via build config 00:02:54.275 00:02:54.275 drivers: 00:02:54.275 common/cpt: not in enabled drivers build config 00:02:54.275 common/dpaax: not in enabled drivers build config 00:02:54.275 common/iavf: not in enabled drivers build config 00:02:54.275 common/idpf: not in enabled drivers build config 00:02:54.275 common/ionic: not in enabled drivers build config 00:02:54.275 common/mvep: not in enabled drivers build config 00:02:54.275 common/octeontx: not in enabled drivers build config 00:02:54.275 bus/auxiliary: not in enabled drivers build config 00:02:54.275 bus/cdx: not in enabled drivers build config 00:02:54.275 bus/dpaa: not in enabled drivers build config 00:02:54.275 bus/fslmc: not in enabled drivers build config 00:02:54.275 bus/ifpga: not in enabled drivers build config 00:02:54.275 bus/platform: not in enabled drivers build config 00:02:54.275 bus/uacce: not in enabled drivers build config 00:02:54.275 bus/vmbus: not in enabled drivers build config 00:02:54.275 common/cnxk: not in enabled drivers build config 00:02:54.275 common/mlx5: not in enabled drivers build config 00:02:54.275 common/nfp: not in enabled drivers build config 00:02:54.275 
common/nitrox: not in enabled drivers build config 00:02:54.275 common/qat: not in enabled drivers build config 00:02:54.275 common/sfc_efx: not in enabled drivers build config 00:02:54.275 mempool/bucket: not in enabled drivers build config 00:02:54.275 mempool/cnxk: not in enabled drivers build config 00:02:54.275 mempool/dpaa: not in enabled drivers build config 00:02:54.275 mempool/dpaa2: not in enabled drivers build config 00:02:54.275 mempool/octeontx: not in enabled drivers build config 00:02:54.275 mempool/stack: not in enabled drivers build config 00:02:54.275 dma/cnxk: not in enabled drivers build config 00:02:54.275 dma/dpaa: not in enabled drivers build config 00:02:54.275 dma/dpaa2: not in enabled drivers build config 00:02:54.275 dma/hisilicon: not in enabled drivers build config 00:02:54.275 dma/idxd: not in enabled drivers build config 00:02:54.275 dma/ioat: not in enabled drivers build config 00:02:54.275 dma/skeleton: not in enabled drivers build config 00:02:54.275 net/af_packet: not in enabled drivers build config 00:02:54.275 net/af_xdp: not in enabled drivers build config 00:02:54.275 net/ark: not in enabled drivers build config 00:02:54.275 net/atlantic: not in enabled drivers build config 00:02:54.275 net/avp: not in enabled drivers build config 00:02:54.275 net/axgbe: not in enabled drivers build config 00:02:54.275 net/bnx2x: not in enabled drivers build config 00:02:54.275 net/bnxt: not in enabled drivers build config 00:02:54.275 net/bonding: not in enabled drivers build config 00:02:54.275 net/cnxk: not in enabled drivers build config 00:02:54.275 net/cpfl: not in enabled drivers build config 00:02:54.275 net/cxgbe: not in enabled drivers build config 00:02:54.275 net/dpaa: not in enabled drivers build config 00:02:54.275 net/dpaa2: not in enabled drivers build config 00:02:54.275 net/e1000: not in enabled drivers build config 00:02:54.275 net/ena: not in enabled drivers build config 00:02:54.275 net/enetc: not in enabled drivers build config 00:02:54.275 net/enetfec: not in enabled drivers build config 00:02:54.275 net/enic: not in enabled drivers build config 00:02:54.275 net/failsafe: not in enabled drivers build config 00:02:54.275 net/fm10k: not in enabled drivers build config 00:02:54.275 net/gve: not in enabled drivers build config 00:02:54.275 net/hinic: not in enabled drivers build config 00:02:54.275 net/hns3: not in enabled drivers build config 00:02:54.275 net/i40e: not in enabled drivers build config 00:02:54.275 net/iavf: not in enabled drivers build config 00:02:54.275 net/ice: not in enabled drivers build config 00:02:54.275 net/idpf: not in enabled drivers build config 00:02:54.275 net/igc: not in enabled drivers build config 00:02:54.275 net/ionic: not in enabled drivers build config 00:02:54.275 net/ipn3ke: not in enabled drivers build config 00:02:54.275 net/ixgbe: not in enabled drivers build config 00:02:54.275 net/mana: not in enabled drivers build config 00:02:54.276 net/memif: not in enabled drivers build config 00:02:54.276 net/mlx4: not in enabled drivers build config 00:02:54.276 net/mlx5: not in enabled drivers build config 00:02:54.276 net/mvneta: not in enabled drivers build config 00:02:54.276 net/mvpp2: not in enabled drivers build config 00:02:54.276 net/netvsc: not in enabled drivers build config 00:02:54.276 net/nfb: not in enabled drivers build config 00:02:54.276 net/nfp: not in enabled drivers build config 00:02:54.276 net/ngbe: not in enabled drivers build config 00:02:54.276 net/null: not in enabled drivers build config 
00:02:54.276 net/octeontx: not in enabled drivers build config 00:02:54.276 net/octeon_ep: not in enabled drivers build config 00:02:54.276 net/pcap: not in enabled drivers build config 00:02:54.276 net/pfe: not in enabled drivers build config 00:02:54.276 net/qede: not in enabled drivers build config 00:02:54.276 net/ring: not in enabled drivers build config 00:02:54.276 net/sfc: not in enabled drivers build config 00:02:54.276 net/softnic: not in enabled drivers build config 00:02:54.276 net/tap: not in enabled drivers build config 00:02:54.276 net/thunderx: not in enabled drivers build config 00:02:54.276 net/txgbe: not in enabled drivers build config 00:02:54.276 net/vdev_netvsc: not in enabled drivers build config 00:02:54.276 net/vhost: not in enabled drivers build config 00:02:54.276 net/virtio: not in enabled drivers build config 00:02:54.276 net/vmxnet3: not in enabled drivers build config 00:02:54.276 raw/*: missing internal dependency, "rawdev" 00:02:54.276 crypto/armv8: not in enabled drivers build config 00:02:54.276 crypto/bcmfs: not in enabled drivers build config 00:02:54.276 crypto/caam_jr: not in enabled drivers build config 00:02:54.276 crypto/ccp: not in enabled drivers build config 00:02:54.276 crypto/cnxk: not in enabled drivers build config 00:02:54.276 crypto/dpaa_sec: not in enabled drivers build config 00:02:54.276 crypto/dpaa2_sec: not in enabled drivers build config 00:02:54.276 crypto/ipsec_mb: not in enabled drivers build config 00:02:54.276 crypto/mlx5: not in enabled drivers build config 00:02:54.276 crypto/mvsam: not in enabled drivers build config 00:02:54.276 crypto/nitrox: not in enabled drivers build config 00:02:54.276 crypto/null: not in enabled drivers build config 00:02:54.276 crypto/octeontx: not in enabled drivers build config 00:02:54.276 crypto/openssl: not in enabled drivers build config 00:02:54.276 crypto/scheduler: not in enabled drivers build config 00:02:54.276 crypto/uadk: not in enabled drivers build config 00:02:54.276 crypto/virtio: not in enabled drivers build config 00:02:54.276 compress/isal: not in enabled drivers build config 00:02:54.276 compress/mlx5: not in enabled drivers build config 00:02:54.276 compress/nitrox: not in enabled drivers build config 00:02:54.276 compress/octeontx: not in enabled drivers build config 00:02:54.276 compress/zlib: not in enabled drivers build config 00:02:54.276 regex/*: missing internal dependency, "regexdev" 00:02:54.276 ml/*: missing internal dependency, "mldev" 00:02:54.276 vdpa/ifc: not in enabled drivers build config 00:02:54.276 vdpa/mlx5: not in enabled drivers build config 00:02:54.276 vdpa/nfp: not in enabled drivers build config 00:02:54.276 vdpa/sfc: not in enabled drivers build config 00:02:54.276 event/*: missing internal dependency, "eventdev" 00:02:54.276 baseband/*: missing internal dependency, "bbdev" 00:02:54.276 gpu/*: missing internal dependency, "gpudev" 00:02:54.276 00:02:54.276 00:02:54.276 Build targets in project: 85 00:02:54.276 00:02:54.276 DPDK 24.03.0 00:02:54.276 00:02:54.276 User defined options 00:02:54.276 buildtype : debug 00:02:54.276 default_library : shared 00:02:54.276 libdir : lib 00:02:54.276 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:54.276 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:54.276 c_link_args : 00:02:54.276 cpu_instruction_set: native 00:02:54.276 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:54.276 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:54.276 enable_docs : false 00:02:54.276 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:54.276 enable_kmods : false 00:02:54.276 max_lcores : 128 00:02:54.276 tests : false 00:02:54.276 00:02:54.276 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.276 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:54.276 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:54.276 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:54.534 [3/268] Linking static target lib/librte_kvargs.a 00:02:54.534 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:54.534 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:54.534 [6/268] Linking static target lib/librte_log.a 00:02:55.100 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.100 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:55.100 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:55.100 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:55.100 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:55.100 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:55.359 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:55.359 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:55.359 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:55.359 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:55.359 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.359 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:55.359 [19/268] Linking static target lib/librte_telemetry.a 00:02:55.617 [20/268] Linking target lib/librte_log.so.24.1 00:02:55.876 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:55.876 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:55.876 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.133 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.133 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.133 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.133 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.133 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:56.133 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.391 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.391 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:56.391 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.391 [33/268] Linking target lib/librte_telemetry.so.24.1 00:02:56.650 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:56.650 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:56.650 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:56.907 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:56.907 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:56.907 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:57.166 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:57.166 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.166 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.166 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.166 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.166 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:57.424 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.424 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.682 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.940 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:57.940 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:57.940 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:58.198 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:58.198 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:58.198 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.198 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.198 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:58.198 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:58.457 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:58.715 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.715 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:58.715 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.715 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.715 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:58.715 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.715 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:58.973 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.973 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:59.231 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:59.489 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:59.489 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:59.489 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:59.489 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:59.489 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:59.747 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:59.747 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:59.747 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:59.747 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:59.747 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:00.005 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.005 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.263 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:00.263 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:00.263 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:00.520 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:00.521 [85/268] Linking static target lib/librte_ring.a 00:03:00.521 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.521 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.778 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:00.778 [89/268] Linking static target lib/librte_eal.a 00:03:00.778 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:01.034 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:01.034 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.034 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:01.034 [94/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:01.291 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:01.547 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:01.547 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:01.547 [98/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.547 [99/268] Linking static target lib/librte_mempool.a 00:03:01.547 [100/268] Linking static target lib/librte_rcu.a 00:03:01.547 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:01.547 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:01.547 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:01.805 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:01.805 [105/268] Linking static target lib/librte_mbuf.a 00:03:01.805 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:02.063 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:02.063 [108/268] Linking static target lib/librte_net.a 00:03:02.063 [109/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.063 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:02.063 [111/268] Linking static target lib/librte_meter.a 00:03:02.319 [112/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:02.319 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:02.576 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.576 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:02.576 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.833 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.833 [118/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.090 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:03.090 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:03.348 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:03.348 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.606 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:03.864 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:03.864 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:03.864 [126/268] Linking static target lib/librte_pci.a 00:03:03.864 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:04.122 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:04.122 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:04.122 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:04.122 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:04.122 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:04.380 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:04.380 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:04.380 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.380 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:04.380 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:04.380 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:04.380 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.380 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:04.380 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:04.380 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:04.380 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:04.380 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:04.638 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:04.638 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:04.638 [147/268] Linking static target lib/librte_ethdev.a 00:03:04.896 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:05.155 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:05.155 [150/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:05.155 [151/268] Linking static target 
lib/librte_cmdline.a 00:03:05.414 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:05.414 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:05.672 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:05.672 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:05.672 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:05.672 [157/268] Linking static target lib/librte_timer.a 00:03:05.672 [158/268] Linking static target lib/librte_hash.a 00:03:05.672 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:05.929 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:05.929 [161/268] Linking static target lib/librte_compressdev.a 00:03:06.188 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:06.188 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:06.446 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:06.446 [165/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.012 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:07.012 [167/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:07.012 [168/268] Linking static target lib/librte_cryptodev.a 00:03:07.012 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:07.012 [170/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:07.012 [171/268] Linking static target lib/librte_dmadev.a 00:03:07.012 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.271 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:07.271 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.271 [175/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:07.271 [176/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.530 [177/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:07.792 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:08.050 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:08.050 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.308 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:08.308 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:08.308 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:08.308 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:08.308 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:08.308 [186/268] Linking static target lib/librte_power.a 00:03:08.566 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:08.566 [188/268] Linking static target lib/librte_reorder.a 00:03:09.130 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:09.130 [190/268] Linking static target lib/librte_security.a 00:03:09.130 [191/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:09.387 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.643 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:09.643 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:09.901 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:10.159 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.159 [197/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.159 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.417 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:10.417 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:10.982 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:10.982 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:11.240 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:11.240 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:11.240 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:11.499 [206/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:11.757 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:11.757 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:11.757 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:11.758 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:11.758 [211/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:11.758 [212/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:11.758 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:12.016 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:12.016 [215/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:12.016 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:12.016 [217/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:12.016 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:12.016 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:12.016 [220/268] Linking static target drivers/librte_bus_pci.a 00:03:12.274 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.274 [222/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:12.274 [223/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:12.533 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:12.533 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.533 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:12.533 [227/268] Linking static target drivers/librte_mempool_ring.a 00:03:12.533 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped 
by meson to capture output) 00:03:13.911 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.911 [230/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:13.911 [231/268] Linking target lib/librte_eal.so.24.1 00:03:13.911 [232/268] Linking static target lib/librte_vhost.a 00:03:13.911 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:13.911 [234/268] Linking target lib/librte_ring.so.24.1 00:03:13.911 [235/268] Linking target lib/librte_timer.so.24.1 00:03:13.911 [236/268] Linking target lib/librte_dmadev.so.24.1 00:03:13.911 [237/268] Linking target lib/librte_meter.so.24.1 00:03:13.911 [238/268] Linking target lib/librte_pci.so.24.1 00:03:13.911 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:14.169 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:14.169 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:14.169 [242/268] Linking target lib/librte_rcu.so.24.1 00:03:14.169 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:14.169 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:14.169 [245/268] Linking target lib/librte_mempool.so.24.1 00:03:14.169 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:14.169 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:14.427 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.427 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:14.427 [250/268] Linking target lib/librte_mbuf.so.24.1 00:03:14.427 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:14.427 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:14.685 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:14.685 [254/268] Linking target lib/librte_net.so.24.1 00:03:14.685 [255/268] Linking target lib/librte_reorder.so.24.1 00:03:14.685 [256/268] Linking target lib/librte_compressdev.so.24.1 00:03:14.685 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:03:14.685 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:14.943 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:14.943 [260/268] Linking target lib/librte_hash.so.24.1 00:03:14.943 [261/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:14.943 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:14.943 [263/268] Linking target lib/librte_security.so.24.1 00:03:14.943 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:14.943 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:15.211 [266/268] Linking target lib/librte_power.so.24.1 00:03:15.489 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.489 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:15.489 INFO: autodetecting backend as ninja 00:03:15.489 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:47.607 CC lib/ut_mock/mock.o 00:03:47.607 CC lib/ut/ut.o 00:03:47.607 CC lib/log/log.o 00:03:47.607 CC 
lib/log/log_flags.o 00:03:47.607 CC lib/log/log_deprecated.o 00:03:47.607 LIB libspdk_ut.a 00:03:47.607 SO libspdk_ut.so.2.0 00:03:47.607 LIB libspdk_ut_mock.a 00:03:47.607 LIB libspdk_log.a 00:03:47.607 SO libspdk_ut_mock.so.6.0 00:03:47.607 SYMLINK libspdk_ut.so 00:03:47.607 SO libspdk_log.so.7.1 00:03:47.607 SYMLINK libspdk_ut_mock.so 00:03:47.607 SYMLINK libspdk_log.so 00:03:47.607 CC lib/ioat/ioat.o 00:03:47.607 CC lib/util/base64.o 00:03:47.607 CC lib/util/bit_array.o 00:03:47.607 CC lib/util/cpuset.o 00:03:47.607 CC lib/util/crc16.o 00:03:47.607 CC lib/util/crc32.o 00:03:47.607 CC lib/util/crc32c.o 00:03:47.607 CXX lib/trace_parser/trace.o 00:03:47.607 CC lib/dma/dma.o 00:03:47.607 CC lib/vfio_user/host/vfio_user_pci.o 00:03:47.607 CC lib/util/crc32_ieee.o 00:03:47.607 CC lib/util/crc64.o 00:03:47.607 CC lib/vfio_user/host/vfio_user.o 00:03:47.607 CC lib/util/dif.o 00:03:47.607 CC lib/util/fd.o 00:03:47.607 CC lib/util/fd_group.o 00:03:47.607 LIB libspdk_dma.a 00:03:47.607 SO libspdk_dma.so.5.0 00:03:47.607 LIB libspdk_ioat.a 00:03:47.607 SO libspdk_ioat.so.7.0 00:03:47.607 CC lib/util/file.o 00:03:47.607 SYMLINK libspdk_dma.so 00:03:47.607 CC lib/util/hexlify.o 00:03:47.607 CC lib/util/iov.o 00:03:47.607 CC lib/util/math.o 00:03:47.607 CC lib/util/net.o 00:03:47.607 SYMLINK libspdk_ioat.so 00:03:47.607 CC lib/util/pipe.o 00:03:47.607 LIB libspdk_vfio_user.a 00:03:47.607 SO libspdk_vfio_user.so.5.0 00:03:47.607 CC lib/util/strerror_tls.o 00:03:47.607 CC lib/util/string.o 00:03:47.607 SYMLINK libspdk_vfio_user.so 00:03:47.607 CC lib/util/uuid.o 00:03:47.607 CC lib/util/xor.o 00:03:47.607 CC lib/util/zipf.o 00:03:47.607 CC lib/util/md5.o 00:03:47.607 LIB libspdk_util.a 00:03:47.607 SO libspdk_util.so.10.1 00:03:47.607 LIB libspdk_trace_parser.a 00:03:47.607 SYMLINK libspdk_util.so 00:03:47.607 SO libspdk_trace_parser.so.6.0 00:03:47.607 SYMLINK libspdk_trace_parser.so 00:03:47.607 CC lib/json/json_parse.o 00:03:47.607 CC lib/json/json_util.o 00:03:47.607 CC lib/idxd/idxd.o 00:03:47.607 CC lib/json/json_write.o 00:03:47.607 CC lib/idxd/idxd_user.o 00:03:47.607 CC lib/rdma_utils/rdma_utils.o 00:03:47.607 CC lib/idxd/idxd_kernel.o 00:03:47.607 CC lib/conf/conf.o 00:03:47.607 CC lib/vmd/vmd.o 00:03:47.607 CC lib/env_dpdk/env.o 00:03:47.607 CC lib/vmd/led.o 00:03:47.607 CC lib/env_dpdk/memory.o 00:03:47.607 CC lib/env_dpdk/pci.o 00:03:47.607 LIB libspdk_rdma_utils.a 00:03:47.607 LIB libspdk_json.a 00:03:47.607 SO libspdk_rdma_utils.so.1.0 00:03:47.607 LIB libspdk_conf.a 00:03:47.607 CC lib/env_dpdk/init.o 00:03:47.607 SO libspdk_json.so.6.0 00:03:47.607 SO libspdk_conf.so.6.0 00:03:47.607 CC lib/env_dpdk/threads.o 00:03:47.607 SYMLINK libspdk_rdma_utils.so 00:03:47.607 CC lib/env_dpdk/pci_ioat.o 00:03:47.607 SYMLINK libspdk_json.so 00:03:47.607 SYMLINK libspdk_conf.so 00:03:47.607 CC lib/env_dpdk/pci_virtio.o 00:03:47.607 CC lib/env_dpdk/pci_vmd.o 00:03:47.607 CC lib/rdma_provider/common.o 00:03:47.607 CC lib/jsonrpc/jsonrpc_server.o 00:03:47.607 LIB libspdk_idxd.a 00:03:47.607 CC lib/env_dpdk/pci_idxd.o 00:03:47.607 SO libspdk_idxd.so.12.1 00:03:47.607 LIB libspdk_vmd.a 00:03:47.607 CC lib/env_dpdk/pci_event.o 00:03:47.607 SYMLINK libspdk_idxd.so 00:03:47.607 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:47.607 SO libspdk_vmd.so.6.0 00:03:47.607 CC lib/env_dpdk/sigbus_handler.o 00:03:47.607 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:47.607 SYMLINK libspdk_vmd.so 00:03:47.607 CC lib/jsonrpc/jsonrpc_client.o 00:03:47.607 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:47.607 CC 
lib/env_dpdk/pci_dpdk.o 00:03:47.607 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:47.607 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:47.607 LIB libspdk_rdma_provider.a 00:03:47.607 SO libspdk_rdma_provider.so.7.0 00:03:47.607 LIB libspdk_jsonrpc.a 00:03:47.607 SYMLINK libspdk_rdma_provider.so 00:03:47.607 SO libspdk_jsonrpc.so.6.0 00:03:47.607 SYMLINK libspdk_jsonrpc.so 00:03:47.607 CC lib/rpc/rpc.o 00:03:47.607 LIB libspdk_env_dpdk.a 00:03:47.607 SO libspdk_env_dpdk.so.15.1 00:03:47.607 LIB libspdk_rpc.a 00:03:47.607 SO libspdk_rpc.so.6.0 00:03:47.607 SYMLINK libspdk_env_dpdk.so 00:03:47.607 SYMLINK libspdk_rpc.so 00:03:47.607 CC lib/keyring/keyring_rpc.o 00:03:47.607 CC lib/keyring/keyring.o 00:03:47.607 CC lib/trace/trace.o 00:03:47.607 CC lib/trace/trace_flags.o 00:03:47.607 CC lib/trace/trace_rpc.o 00:03:47.607 CC lib/notify/notify.o 00:03:47.607 CC lib/notify/notify_rpc.o 00:03:47.607 LIB libspdk_notify.a 00:03:47.607 SO libspdk_notify.so.6.0 00:03:47.607 LIB libspdk_keyring.a 00:03:47.607 LIB libspdk_trace.a 00:03:47.607 SYMLINK libspdk_notify.so 00:03:47.607 SO libspdk_keyring.so.2.0 00:03:47.607 SO libspdk_trace.so.11.0 00:03:47.607 SYMLINK libspdk_keyring.so 00:03:47.607 SYMLINK libspdk_trace.so 00:03:47.865 CC lib/thread/thread.o 00:03:47.865 CC lib/thread/iobuf.o 00:03:47.865 CC lib/sock/sock.o 00:03:47.865 CC lib/sock/sock_rpc.o 00:03:48.432 LIB libspdk_sock.a 00:03:48.432 SO libspdk_sock.so.10.0 00:03:48.432 SYMLINK libspdk_sock.so 00:03:48.691 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:48.691 CC lib/nvme/nvme_ctrlr.o 00:03:48.691 CC lib/nvme/nvme_fabric.o 00:03:48.691 CC lib/nvme/nvme_ns_cmd.o 00:03:48.691 CC lib/nvme/nvme_ns.o 00:03:48.691 CC lib/nvme/nvme_pcie_common.o 00:03:48.691 CC lib/nvme/nvme_qpair.o 00:03:48.691 CC lib/nvme/nvme.o 00:03:48.691 CC lib/nvme/nvme_pcie.o 00:03:49.689 CC lib/nvme/nvme_quirks.o 00:03:49.689 LIB libspdk_thread.a 00:03:49.689 CC lib/nvme/nvme_transport.o 00:03:49.689 SO libspdk_thread.so.11.0 00:03:49.689 CC lib/nvme/nvme_discovery.o 00:03:49.689 SYMLINK libspdk_thread.so 00:03:49.689 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.689 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.689 CC lib/nvme/nvme_tcp.o 00:03:49.689 CC lib/nvme/nvme_opal.o 00:03:49.689 CC lib/nvme/nvme_io_msg.o 00:03:49.948 CC lib/nvme/nvme_poll_group.o 00:03:50.206 CC lib/nvme/nvme_zns.o 00:03:50.206 CC lib/nvme/nvme_stubs.o 00:03:50.206 CC lib/nvme/nvme_auth.o 00:03:50.206 CC lib/nvme/nvme_cuse.o 00:03:50.464 CC lib/nvme/nvme_rdma.o 00:03:50.464 CC lib/accel/accel.o 00:03:50.722 CC lib/accel/accel_rpc.o 00:03:50.722 CC lib/blob/blobstore.o 00:03:50.722 CC lib/accel/accel_sw.o 00:03:50.979 CC lib/blob/request.o 00:03:50.979 CC lib/blob/zeroes.o 00:03:51.237 CC lib/blob/blob_bs_dev.o 00:03:51.237 CC lib/init/json_config.o 00:03:51.237 CC lib/init/subsystem.o 00:03:51.237 CC lib/init/subsystem_rpc.o 00:03:51.495 CC lib/init/rpc.o 00:03:51.495 CC lib/virtio/virtio.o 00:03:51.495 CC lib/virtio/virtio_vhost_user.o 00:03:51.495 CC lib/fsdev/fsdev.o 00:03:51.495 CC lib/fsdev/fsdev_io.o 00:03:51.495 CC lib/fsdev/fsdev_rpc.o 00:03:51.495 CC lib/virtio/virtio_vfio_user.o 00:03:51.495 LIB libspdk_init.a 00:03:51.753 SO libspdk_init.so.6.0 00:03:51.753 LIB libspdk_accel.a 00:03:51.753 CC lib/virtio/virtio_pci.o 00:03:51.753 SO libspdk_accel.so.16.0 00:03:51.753 SYMLINK libspdk_init.so 00:03:51.753 SYMLINK libspdk_accel.so 00:03:52.011 LIB libspdk_nvme.a 00:03:52.011 CC lib/event/app.o 00:03:52.011 CC lib/event/reactor.o 00:03:52.011 CC lib/event/log_rpc.o 00:03:52.011 CC lib/event/app_rpc.o 00:03:52.011 CC 
lib/event/scheduler_static.o 00:03:52.011 CC lib/bdev/bdev.o 00:03:52.011 LIB libspdk_virtio.a 00:03:52.011 SO libspdk_virtio.so.7.0 00:03:52.011 SO libspdk_nvme.so.15.0 00:03:52.011 LIB libspdk_fsdev.a 00:03:52.269 CC lib/bdev/bdev_rpc.o 00:03:52.269 SO libspdk_fsdev.so.2.0 00:03:52.269 SYMLINK libspdk_virtio.so 00:03:52.269 CC lib/bdev/bdev_zone.o 00:03:52.269 CC lib/bdev/part.o 00:03:52.269 CC lib/bdev/scsi_nvme.o 00:03:52.269 SYMLINK libspdk_fsdev.so 00:03:52.269 SYMLINK libspdk_nvme.so 00:03:52.527 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:52.527 LIB libspdk_event.a 00:03:52.527 SO libspdk_event.so.14.0 00:03:52.527 SYMLINK libspdk_event.so 00:03:53.094 LIB libspdk_fuse_dispatcher.a 00:03:53.094 SO libspdk_fuse_dispatcher.so.1.0 00:03:53.094 SYMLINK libspdk_fuse_dispatcher.so 00:03:53.660 LIB libspdk_blob.a 00:03:53.918 SO libspdk_blob.so.12.0 00:03:53.918 SYMLINK libspdk_blob.so 00:03:54.176 CC lib/lvol/lvol.o 00:03:54.176 CC lib/blobfs/tree.o 00:03:54.176 CC lib/blobfs/blobfs.o 00:03:54.741 LIB libspdk_bdev.a 00:03:54.998 SO libspdk_bdev.so.17.0 00:03:54.998 SYMLINK libspdk_bdev.so 00:03:54.998 LIB libspdk_blobfs.a 00:03:55.256 SO libspdk_blobfs.so.11.0 00:03:55.256 LIB libspdk_lvol.a 00:03:55.256 SYMLINK libspdk_blobfs.so 00:03:55.256 SO libspdk_lvol.so.11.0 00:03:55.256 CC lib/scsi/dev.o 00:03:55.256 CC lib/scsi/lun.o 00:03:55.256 CC lib/scsi/port.o 00:03:55.256 CC lib/ftl/ftl_core.o 00:03:55.256 CC lib/nbd/nbd.o 00:03:55.256 CC lib/nvmf/ctrlr.o 00:03:55.256 CC lib/nbd/nbd_rpc.o 00:03:55.256 CC lib/scsi/scsi.o 00:03:55.256 CC lib/ublk/ublk.o 00:03:55.256 SYMLINK libspdk_lvol.so 00:03:55.256 CC lib/scsi/scsi_bdev.o 00:03:55.513 CC lib/scsi/scsi_pr.o 00:03:55.513 CC lib/scsi/scsi_rpc.o 00:03:55.513 CC lib/ftl/ftl_init.o 00:03:55.513 CC lib/scsi/task.o 00:03:55.513 CC lib/ftl/ftl_layout.o 00:03:55.771 CC lib/ftl/ftl_debug.o 00:03:55.771 LIB libspdk_nbd.a 00:03:55.771 CC lib/ftl/ftl_io.o 00:03:55.771 CC lib/ftl/ftl_sb.o 00:03:55.771 SO libspdk_nbd.so.7.0 00:03:55.771 CC lib/ftl/ftl_l2p.o 00:03:55.771 CC lib/ftl/ftl_l2p_flat.o 00:03:55.771 SYMLINK libspdk_nbd.so 00:03:55.771 CC lib/ftl/ftl_nv_cache.o 00:03:55.771 LIB libspdk_scsi.a 00:03:55.771 CC lib/ftl/ftl_band.o 00:03:56.028 CC lib/ublk/ublk_rpc.o 00:03:56.028 SO libspdk_scsi.so.9.0 00:03:56.028 CC lib/nvmf/ctrlr_discovery.o 00:03:56.028 CC lib/ftl/ftl_band_ops.o 00:03:56.028 SYMLINK libspdk_scsi.so 00:03:56.028 CC lib/ftl/ftl_writer.o 00:03:56.028 CC lib/ftl/ftl_rq.o 00:03:56.028 CC lib/nvmf/ctrlr_bdev.o 00:03:56.028 CC lib/ftl/ftl_reloc.o 00:03:56.028 LIB libspdk_ublk.a 00:03:56.028 SO libspdk_ublk.so.3.0 00:03:56.285 SYMLINK libspdk_ublk.so 00:03:56.285 CC lib/ftl/ftl_l2p_cache.o 00:03:56.285 CC lib/nvmf/subsystem.o 00:03:56.285 CC lib/ftl/ftl_p2l.o 00:03:56.285 CC lib/nvmf/nvmf.o 00:03:56.542 CC lib/nvmf/nvmf_rpc.o 00:03:56.542 CC lib/iscsi/conn.o 00:03:56.542 CC lib/vhost/vhost.o 00:03:56.542 CC lib/vhost/vhost_rpc.o 00:03:56.800 CC lib/nvmf/transport.o 00:03:56.800 CC lib/nvmf/tcp.o 00:03:56.800 CC lib/ftl/ftl_p2l_log.o 00:03:57.057 CC lib/nvmf/stubs.o 00:03:57.057 CC lib/iscsi/init_grp.o 00:03:57.057 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.314 CC lib/nvmf/mdns_server.o 00:03:57.314 CC lib/vhost/vhost_scsi.o 00:03:57.314 CC lib/vhost/vhost_blk.o 00:03:57.314 CC lib/iscsi/iscsi.o 00:03:57.571 CC lib/nvmf/rdma.o 00:03:57.571 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.571 CC lib/nvmf/auth.o 00:03:57.571 CC lib/vhost/rte_vhost_user.o 00:03:57.571 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:57.829 CC lib/ftl/mngt/ftl_mngt_startup.o 
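The CC/LIB/SO/SYMLINK entries in this stretch of the log are SPDK's own library build, which begins once the bundled DPDK has finished compiling under ninja. A minimal sketch of how this stage can be reproduced outside the CI pipeline — the repository URL and the -j10 parallelism (matching the "-j 10" passed to ninja above) are assumptions, not taken from this log:

  git clone https://github.com/spdk/spdk && cd spdk
  git submodule update --init    # pulls the bundled DPDK whose meson/ninja build appears earlier in the log
  ./configure                    # drives the DPDK meson configuration summarized at the top of this excerpt
  make -j10                      # emits the CC (compile), LIB/SO (link), and SYMLINK lines seen here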
00:03:57.829 CC lib/iscsi/param.o 00:03:57.829 CC lib/iscsi/portal_grp.o 00:03:57.829 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.106 CC lib/iscsi/tgt_node.o 00:03:58.106 CC lib/iscsi/iscsi_subsystem.o 00:03:58.366 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.366 CC lib/iscsi/iscsi_rpc.o 00:03:58.366 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.366 CC lib/iscsi/task.o 00:03:58.366 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.625 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.625 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.625 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.625 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.625 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.625 CC lib/ftl/utils/ftl_conf.o 00:03:58.625 LIB libspdk_vhost.a 00:03:58.625 CC lib/ftl/utils/ftl_md.o 00:03:58.968 SO libspdk_vhost.so.8.0 00:03:58.968 CC lib/ftl/utils/ftl_mempool.o 00:03:58.968 CC lib/ftl/utils/ftl_bitmap.o 00:03:58.968 LIB libspdk_iscsi.a 00:03:58.968 CC lib/ftl/utils/ftl_property.o 00:03:58.968 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:58.968 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:58.968 SYMLINK libspdk_vhost.so 00:03:58.968 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:58.968 SO libspdk_iscsi.so.8.0 00:03:58.968 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:58.968 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.233 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.233 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.233 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.233 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.233 SYMLINK libspdk_iscsi.so 00:03:59.233 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.233 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.233 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:59.233 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:59.233 CC lib/ftl/base/ftl_base_dev.o 00:03:59.233 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.233 CC lib/ftl/ftl_trace.o 00:03:59.493 LIB libspdk_ftl.a 00:03:59.493 LIB libspdk_nvmf.a 00:03:59.752 SO libspdk_nvmf.so.20.0 00:03:59.752 SO libspdk_ftl.so.9.0 00:04:00.011 SYMLINK libspdk_nvmf.so 00:04:00.270 SYMLINK libspdk_ftl.so 00:04:00.528 CC module/env_dpdk/env_dpdk_rpc.o 00:04:00.528 CC module/scheduler/gscheduler/gscheduler.o 00:04:00.528 CC module/accel/ioat/accel_ioat.o 00:04:00.528 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:00.528 CC module/fsdev/aio/fsdev_aio.o 00:04:00.528 CC module/sock/posix/posix.o 00:04:00.528 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.528 CC module/blob/bdev/blob_bdev.o 00:04:00.528 CC module/accel/error/accel_error.o 00:04:00.528 CC module/keyring/file/keyring.o 00:04:00.528 LIB libspdk_env_dpdk_rpc.a 00:04:00.528 SO libspdk_env_dpdk_rpc.so.6.0 00:04:00.785 SYMLINK libspdk_env_dpdk_rpc.so 00:04:00.785 CC module/accel/error/accel_error_rpc.o 00:04:00.785 LIB libspdk_scheduler_dpdk_governor.a 00:04:00.785 LIB libspdk_scheduler_gscheduler.a 00:04:00.785 CC module/keyring/file/keyring_rpc.o 00:04:00.785 LIB libspdk_scheduler_dynamic.a 00:04:00.785 CC module/accel/ioat/accel_ioat_rpc.o 00:04:00.785 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:00.785 SO libspdk_scheduler_gscheduler.so.4.0 00:04:00.785 SYMLINK libspdk_scheduler_gscheduler.so 00:04:00.785 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:00.785 SO libspdk_scheduler_dynamic.so.4.0 00:04:00.785 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:00.785 LIB libspdk_blob_bdev.a 00:04:00.785 LIB libspdk_accel_error.a 00:04:00.785 SO libspdk_blob_bdev.so.12.0 00:04:00.785 SO libspdk_accel_error.so.2.0 00:04:00.785 SYMLINK libspdk_scheduler_dynamic.so 00:04:01.042 LIB libspdk_accel_ioat.a 00:04:01.042 CC 
module/fsdev/aio/linux_aio_mgr.o 00:04:01.042 LIB libspdk_keyring_file.a 00:04:01.042 SO libspdk_accel_ioat.so.6.0 00:04:01.042 SYMLINK libspdk_blob_bdev.so 00:04:01.042 SO libspdk_keyring_file.so.2.0 00:04:01.042 SYMLINK libspdk_accel_error.so 00:04:01.042 CC module/keyring/linux/keyring.o 00:04:01.042 SYMLINK libspdk_keyring_file.so 00:04:01.042 CC module/keyring/linux/keyring_rpc.o 00:04:01.042 SYMLINK libspdk_accel_ioat.so 00:04:01.042 CC module/accel/dsa/accel_dsa.o 00:04:01.300 LIB libspdk_keyring_linux.a 00:04:01.300 CC module/accel/iaa/accel_iaa.o 00:04:01.300 SO libspdk_keyring_linux.so.1.0 00:04:01.300 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.300 SYMLINK libspdk_keyring_linux.so 00:04:01.300 CC module/bdev/delay/vbdev_delay.o 00:04:01.300 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.300 CC module/bdev/error/vbdev_error.o 00:04:01.300 CC module/bdev/gpt/gpt.o 00:04:01.300 LIB libspdk_sock_posix.a 00:04:01.300 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.300 LIB libspdk_fsdev_aio.a 00:04:01.300 SO libspdk_sock_posix.so.6.0 00:04:01.300 SO libspdk_fsdev_aio.so.1.0 00:04:01.300 CC module/accel/iaa/accel_iaa_rpc.o 00:04:01.300 CC module/accel/dsa/accel_dsa_rpc.o 00:04:01.557 SYMLINK libspdk_sock_posix.so 00:04:01.557 LIB libspdk_blobfs_bdev.a 00:04:01.557 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:01.557 SYMLINK libspdk_fsdev_aio.so 00:04:01.557 CC module/bdev/error/vbdev_error_rpc.o 00:04:01.557 SO libspdk_blobfs_bdev.so.6.0 00:04:01.557 CC module/bdev/gpt/vbdev_gpt.o 00:04:01.557 LIB libspdk_accel_iaa.a 00:04:01.557 LIB libspdk_accel_dsa.a 00:04:01.558 SYMLINK libspdk_blobfs_bdev.so 00:04:01.558 SO libspdk_accel_iaa.so.3.0 00:04:01.558 SO libspdk_accel_dsa.so.5.0 00:04:01.558 CC module/bdev/malloc/bdev_malloc.o 00:04:01.558 SYMLINK libspdk_accel_iaa.so 00:04:01.558 LIB libspdk_bdev_error.a 00:04:01.558 SYMLINK libspdk_accel_dsa.so 00:04:01.558 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:01.815 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:01.815 SO libspdk_bdev_error.so.6.0 00:04:01.815 CC module/bdev/nvme/bdev_nvme.o 00:04:01.815 CC module/bdev/null/bdev_null.o 00:04:01.815 SYMLINK libspdk_bdev_error.so 00:04:01.815 LIB libspdk_bdev_gpt.a 00:04:01.815 CC module/bdev/passthru/vbdev_passthru.o 00:04:01.815 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:01.815 SO libspdk_bdev_gpt.so.6.0 00:04:01.815 LIB libspdk_bdev_delay.a 00:04:01.815 LIB libspdk_bdev_lvol.a 00:04:01.815 SO libspdk_bdev_delay.so.6.0 00:04:01.815 SYMLINK libspdk_bdev_gpt.so 00:04:02.073 SO libspdk_bdev_lvol.so.6.0 00:04:02.073 CC module/bdev/raid/bdev_raid.o 00:04:02.073 SYMLINK libspdk_bdev_delay.so 00:04:02.073 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.073 SYMLINK libspdk_bdev_lvol.so 00:04:02.073 CC module/bdev/null/bdev_null_rpc.o 00:04:02.073 LIB libspdk_bdev_malloc.a 00:04:02.073 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:02.073 SO libspdk_bdev_malloc.so.6.0 00:04:02.073 CC module/bdev/split/vbdev_split.o 00:04:02.073 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:02.073 SYMLINK libspdk_bdev_malloc.so 00:04:02.073 LIB libspdk_bdev_passthru.a 00:04:02.331 SO libspdk_bdev_passthru.so.6.0 00:04:02.331 LIB libspdk_bdev_null.a 00:04:02.331 CC module/bdev/aio/bdev_aio.o 00:04:02.331 CC module/bdev/aio/bdev_aio_rpc.o 00:04:02.331 SO libspdk_bdev_null.so.6.0 00:04:02.331 SYMLINK libspdk_bdev_passthru.so 00:04:02.331 CC module/bdev/nvme/nvme_rpc.o 00:04:02.331 CC module/bdev/ftl/bdev_ftl.o 00:04:02.331 SYMLINK libspdk_bdev_null.so 00:04:02.331 CC module/bdev/split/vbdev_split_rpc.o 00:04:02.331 CC 
module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.331 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.588 LIB libspdk_bdev_split.a 00:04:02.588 CC module/bdev/nvme/bdev_mdns_client.o 00:04:02.588 LIB libspdk_bdev_zone_block.a 00:04:02.588 SO libspdk_bdev_split.so.6.0 00:04:02.588 SO libspdk_bdev_zone_block.so.6.0 00:04:02.588 LIB libspdk_bdev_aio.a 00:04:02.588 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:02.588 SYMLINK libspdk_bdev_split.so 00:04:02.588 SO libspdk_bdev_aio.so.6.0 00:04:02.588 CC module/bdev/raid/raid0.o 00:04:02.588 SYMLINK libspdk_bdev_zone_block.so 00:04:02.588 SYMLINK libspdk_bdev_aio.so 00:04:02.588 CC module/bdev/nvme/vbdev_opal.o 00:04:02.588 CC module/bdev/raid/raid1.o 00:04:02.847 CC module/bdev/raid/concat.o 00:04:02.847 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.847 CC module/bdev/iscsi/bdev_iscsi.o 00:04:02.847 LIB libspdk_bdev_ftl.a 00:04:02.847 SO libspdk_bdev_ftl.so.6.0 00:04:02.847 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:02.847 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:03.114 SYMLINK libspdk_bdev_ftl.so 00:04:03.115 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:03.115 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.115 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.115 LIB libspdk_bdev_raid.a 00:04:03.115 SO libspdk_bdev_raid.so.6.0 00:04:03.115 SYMLINK libspdk_bdev_raid.so 00:04:03.115 LIB libspdk_bdev_iscsi.a 00:04:03.378 SO libspdk_bdev_iscsi.so.6.0 00:04:03.378 SYMLINK libspdk_bdev_iscsi.so 00:04:03.378 LIB libspdk_bdev_virtio.a 00:04:03.636 SO libspdk_bdev_virtio.so.6.0 00:04:03.636 SYMLINK libspdk_bdev_virtio.so 00:04:04.573 LIB libspdk_bdev_nvme.a 00:04:04.573 SO libspdk_bdev_nvme.so.7.1 00:04:04.573 SYMLINK libspdk_bdev_nvme.so 00:04:05.139 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:05.139 CC module/event/subsystems/iobuf/iobuf.o 00:04:05.139 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:05.139 CC module/event/subsystems/fsdev/fsdev.o 00:04:05.139 CC module/event/subsystems/scheduler/scheduler.o 00:04:05.139 CC module/event/subsystems/sock/sock.o 00:04:05.139 CC module/event/subsystems/keyring/keyring.o 00:04:05.139 CC module/event/subsystems/vmd/vmd.o 00:04:05.139 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:05.397 LIB libspdk_event_keyring.a 00:04:05.397 LIB libspdk_event_vhost_blk.a 00:04:05.397 LIB libspdk_event_scheduler.a 00:04:05.397 SO libspdk_event_keyring.so.1.0 00:04:05.397 LIB libspdk_event_fsdev.a 00:04:05.397 SO libspdk_event_vhost_blk.so.3.0 00:04:05.397 LIB libspdk_event_sock.a 00:04:05.397 SO libspdk_event_scheduler.so.4.0 00:04:05.397 SO libspdk_event_fsdev.so.1.0 00:04:05.397 SO libspdk_event_sock.so.5.0 00:04:05.397 SYMLINK libspdk_event_keyring.so 00:04:05.397 SYMLINK libspdk_event_vhost_blk.so 00:04:05.397 LIB libspdk_event_iobuf.a 00:04:05.397 LIB libspdk_event_vmd.a 00:04:05.397 SYMLINK libspdk_event_scheduler.so 00:04:05.397 SYMLINK libspdk_event_fsdev.so 00:04:05.397 SO libspdk_event_iobuf.so.3.0 00:04:05.397 SO libspdk_event_vmd.so.6.0 00:04:05.397 SYMLINK libspdk_event_sock.so 00:04:05.397 SYMLINK libspdk_event_iobuf.so 00:04:05.397 SYMLINK libspdk_event_vmd.so 00:04:05.656 CC module/event/subsystems/accel/accel.o 00:04:05.915 LIB libspdk_event_accel.a 00:04:05.915 SO libspdk_event_accel.so.6.0 00:04:05.915 SYMLINK libspdk_event_accel.so 00:04:06.174 CC module/event/subsystems/bdev/bdev.o 00:04:06.433 LIB libspdk_event_bdev.a 00:04:06.433 SO libspdk_event_bdev.so.6.0 00:04:06.433 SYMLINK libspdk_event_bdev.so 00:04:06.692 CC module/event/subsystems/nbd/nbd.o 00:04:06.692 CC 
module/event/subsystems/scsi/scsi.o 00:04:06.692 CC module/event/subsystems/ublk/ublk.o 00:04:06.692 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:06.692 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:06.951 LIB libspdk_event_nbd.a 00:04:06.951 LIB libspdk_event_ublk.a 00:04:06.951 SO libspdk_event_nbd.so.6.0 00:04:06.951 SO libspdk_event_ublk.so.3.0 00:04:06.951 LIB libspdk_event_scsi.a 00:04:06.951 SO libspdk_event_scsi.so.6.0 00:04:06.951 SYMLINK libspdk_event_nbd.so 00:04:06.951 SYMLINK libspdk_event_ublk.so 00:04:06.951 LIB libspdk_event_nvmf.a 00:04:06.951 SYMLINK libspdk_event_scsi.so 00:04:06.951 SO libspdk_event_nvmf.so.6.0 00:04:07.210 SYMLINK libspdk_event_nvmf.so 00:04:07.210 CC module/event/subsystems/iscsi/iscsi.o 00:04:07.210 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:07.468 LIB libspdk_event_vhost_scsi.a 00:04:07.468 LIB libspdk_event_iscsi.a 00:04:07.468 SO libspdk_event_vhost_scsi.so.3.0 00:04:07.468 SO libspdk_event_iscsi.so.6.0 00:04:07.469 SYMLINK libspdk_event_vhost_scsi.so 00:04:07.469 SYMLINK libspdk_event_iscsi.so 00:04:07.727 SO libspdk.so.6.0 00:04:07.727 SYMLINK libspdk.so 00:04:07.985 CXX app/trace/trace.o 00:04:07.985 CC app/spdk_lspci/spdk_lspci.o 00:04:07.985 CC app/trace_record/trace_record.o 00:04:07.985 CC app/spdk_tgt/spdk_tgt.o 00:04:07.985 CC app/iscsi_tgt/iscsi_tgt.o 00:04:07.985 CC app/nvmf_tgt/nvmf_main.o 00:04:07.985 CC examples/ioat/perf/perf.o 00:04:07.985 CC examples/util/zipf/zipf.o 00:04:07.985 CC test/thread/poller_perf/poller_perf.o 00:04:08.244 CC test/dma/test_dma/test_dma.o 00:04:08.244 LINK spdk_lspci 00:04:08.244 LINK poller_perf 00:04:08.244 LINK zipf 00:04:08.244 LINK nvmf_tgt 00:04:08.502 LINK iscsi_tgt 00:04:08.502 LINK spdk_tgt 00:04:08.502 LINK ioat_perf 00:04:08.502 LINK spdk_trace_record 00:04:08.502 LINK spdk_trace 00:04:08.502 CC app/spdk_nvme_perf/perf.o 00:04:08.761 TEST_HEADER include/spdk/accel.h 00:04:08.761 TEST_HEADER include/spdk/accel_module.h 00:04:08.761 TEST_HEADER include/spdk/assert.h 00:04:08.761 TEST_HEADER include/spdk/barrier.h 00:04:08.761 TEST_HEADER include/spdk/base64.h 00:04:08.761 TEST_HEADER include/spdk/bdev.h 00:04:08.761 TEST_HEADER include/spdk/bdev_module.h 00:04:08.761 TEST_HEADER include/spdk/bdev_zone.h 00:04:08.761 TEST_HEADER include/spdk/bit_array.h 00:04:08.761 TEST_HEADER include/spdk/bit_pool.h 00:04:08.761 TEST_HEADER include/spdk/blob_bdev.h 00:04:08.761 CC examples/ioat/verify/verify.o 00:04:08.761 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:08.761 TEST_HEADER include/spdk/blobfs.h 00:04:08.761 TEST_HEADER include/spdk/blob.h 00:04:08.761 TEST_HEADER include/spdk/conf.h 00:04:08.761 CC app/spdk_nvme_identify/identify.o 00:04:08.761 TEST_HEADER include/spdk/config.h 00:04:08.761 TEST_HEADER include/spdk/cpuset.h 00:04:08.761 TEST_HEADER include/spdk/crc16.h 00:04:08.761 TEST_HEADER include/spdk/crc32.h 00:04:08.761 TEST_HEADER include/spdk/crc64.h 00:04:08.761 TEST_HEADER include/spdk/dif.h 00:04:08.761 LINK test_dma 00:04:08.761 TEST_HEADER include/spdk/dma.h 00:04:08.761 TEST_HEADER include/spdk/endian.h 00:04:08.761 TEST_HEADER include/spdk/env_dpdk.h 00:04:08.761 TEST_HEADER include/spdk/env.h 00:04:08.761 TEST_HEADER include/spdk/event.h 00:04:08.761 TEST_HEADER include/spdk/fd_group.h 00:04:08.761 TEST_HEADER include/spdk/fd.h 00:04:08.761 TEST_HEADER include/spdk/file.h 00:04:08.761 TEST_HEADER include/spdk/fsdev.h 00:04:08.761 TEST_HEADER include/spdk/fsdev_module.h 00:04:08.761 TEST_HEADER include/spdk/ftl.h 00:04:08.761 TEST_HEADER 
include/spdk/fuse_dispatcher.h 00:04:08.761 CC app/spdk_top/spdk_top.o 00:04:08.761 TEST_HEADER include/spdk/gpt_spec.h 00:04:08.761 TEST_HEADER include/spdk/hexlify.h 00:04:08.761 CC app/spdk_nvme_discover/discovery_aer.o 00:04:08.761 TEST_HEADER include/spdk/histogram_data.h 00:04:08.761 TEST_HEADER include/spdk/idxd.h 00:04:08.761 TEST_HEADER include/spdk/idxd_spec.h 00:04:08.761 TEST_HEADER include/spdk/init.h 00:04:08.761 TEST_HEADER include/spdk/ioat.h 00:04:08.761 CC test/app/bdev_svc/bdev_svc.o 00:04:08.761 TEST_HEADER include/spdk/ioat_spec.h 00:04:08.761 TEST_HEADER include/spdk/iscsi_spec.h 00:04:08.761 TEST_HEADER include/spdk/json.h 00:04:08.761 TEST_HEADER include/spdk/jsonrpc.h 00:04:08.761 TEST_HEADER include/spdk/keyring.h 00:04:08.761 TEST_HEADER include/spdk/keyring_module.h 00:04:08.761 TEST_HEADER include/spdk/likely.h 00:04:08.761 TEST_HEADER include/spdk/log.h 00:04:08.761 TEST_HEADER include/spdk/lvol.h 00:04:08.761 TEST_HEADER include/spdk/md5.h 00:04:08.761 TEST_HEADER include/spdk/memory.h 00:04:08.761 TEST_HEADER include/spdk/mmio.h 00:04:08.761 TEST_HEADER include/spdk/nbd.h 00:04:08.761 TEST_HEADER include/spdk/net.h 00:04:09.019 TEST_HEADER include/spdk/notify.h 00:04:09.019 TEST_HEADER include/spdk/nvme.h 00:04:09.019 TEST_HEADER include/spdk/nvme_intel.h 00:04:09.019 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:09.019 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:09.019 TEST_HEADER include/spdk/nvme_spec.h 00:04:09.019 TEST_HEADER include/spdk/nvme_zns.h 00:04:09.019 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:09.019 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:09.019 TEST_HEADER include/spdk/nvmf.h 00:04:09.019 TEST_HEADER include/spdk/nvmf_spec.h 00:04:09.019 TEST_HEADER include/spdk/nvmf_transport.h 00:04:09.019 TEST_HEADER include/spdk/opal.h 00:04:09.019 TEST_HEADER include/spdk/opal_spec.h 00:04:09.019 TEST_HEADER include/spdk/pci_ids.h 00:04:09.019 TEST_HEADER include/spdk/pipe.h 00:04:09.019 TEST_HEADER include/spdk/queue.h 00:04:09.019 TEST_HEADER include/spdk/reduce.h 00:04:09.019 TEST_HEADER include/spdk/rpc.h 00:04:09.019 TEST_HEADER include/spdk/scheduler.h 00:04:09.019 TEST_HEADER include/spdk/scsi.h 00:04:09.019 TEST_HEADER include/spdk/scsi_spec.h 00:04:09.019 TEST_HEADER include/spdk/sock.h 00:04:09.019 TEST_HEADER include/spdk/stdinc.h 00:04:09.019 TEST_HEADER include/spdk/string.h 00:04:09.019 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:09.019 TEST_HEADER include/spdk/thread.h 00:04:09.019 TEST_HEADER include/spdk/trace.h 00:04:09.019 TEST_HEADER include/spdk/trace_parser.h 00:04:09.019 TEST_HEADER include/spdk/tree.h 00:04:09.019 TEST_HEADER include/spdk/ublk.h 00:04:09.019 TEST_HEADER include/spdk/util.h 00:04:09.019 TEST_HEADER include/spdk/uuid.h 00:04:09.019 TEST_HEADER include/spdk/version.h 00:04:09.019 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:09.019 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:09.019 TEST_HEADER include/spdk/vhost.h 00:04:09.019 TEST_HEADER include/spdk/vmd.h 00:04:09.019 TEST_HEADER include/spdk/xor.h 00:04:09.019 TEST_HEADER include/spdk/zipf.h 00:04:09.019 CXX test/cpp_headers/accel.o 00:04:09.019 CXX test/cpp_headers/accel_module.o 00:04:09.019 LINK bdev_svc 00:04:09.019 LINK verify 00:04:09.277 LINK spdk_nvme_discover 00:04:09.277 LINK interrupt_tgt 00:04:09.277 CXX test/cpp_headers/assert.o 00:04:09.277 CC app/spdk_dd/spdk_dd.o 00:04:09.535 CC test/app/histogram_perf/histogram_perf.o 00:04:09.535 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:09.535 CXX test/cpp_headers/barrier.o 00:04:09.535 CC 
test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:09.535 LINK histogram_perf 00:04:09.535 CC test/app/jsoncat/jsoncat.o 00:04:09.535 LINK spdk_nvme_perf 00:04:09.792 CXX test/cpp_headers/base64.o 00:04:09.792 LINK jsoncat 00:04:09.792 LINK spdk_dd 00:04:09.792 LINK spdk_top 00:04:09.792 LINK spdk_nvme_identify 00:04:09.792 CC test/app/stub/stub.o 00:04:09.792 CXX test/cpp_headers/bdev.o 00:04:10.048 LINK nvme_fuzz 00:04:10.048 CC test/env/mem_callbacks/mem_callbacks.o 00:04:10.048 LINK stub 00:04:10.305 CC examples/thread/thread/thread_ex.o 00:04:10.305 CXX test/cpp_headers/bdev_module.o 00:04:10.305 CC examples/sock/hello_world/hello_sock.o 00:04:10.305 CC examples/vmd/lsvmd/lsvmd.o 00:04:10.305 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:10.305 CC app/fio/nvme/fio_plugin.o 00:04:10.563 LINK lsvmd 00:04:10.563 CXX test/cpp_headers/bdev_zone.o 00:04:10.563 LINK thread 00:04:10.563 LINK hello_sock 00:04:10.563 CC test/event/event_perf/event_perf.o 00:04:10.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:10.820 CXX test/cpp_headers/bit_array.o 00:04:10.820 CC examples/vmd/led/led.o 00:04:10.820 CXX test/cpp_headers/bit_pool.o 00:04:10.820 LINK event_perf 00:04:10.820 LINK mem_callbacks 00:04:10.820 CC app/fio/bdev/fio_plugin.o 00:04:11.078 LINK led 00:04:11.078 CXX test/cpp_headers/blob_bdev.o 00:04:11.078 LINK spdk_nvme 00:04:11.078 CC test/event/reactor/reactor.o 00:04:11.078 CC test/event/reactor_perf/reactor_perf.o 00:04:11.078 CC test/env/vtophys/vtophys.o 00:04:11.078 LINK vhost_fuzz 00:04:11.078 CXX test/cpp_headers/blobfs_bdev.o 00:04:11.335 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:11.335 LINK reactor_perf 00:04:11.335 LINK reactor 00:04:11.335 LINK vtophys 00:04:11.335 CC app/vhost/vhost.o 00:04:11.335 CXX test/cpp_headers/blobfs.o 00:04:11.335 CC test/env/memory/memory_ut.o 00:04:11.335 LINK env_dpdk_post_init 00:04:11.335 LINK spdk_bdev 00:04:11.592 CC test/event/app_repeat/app_repeat.o 00:04:11.592 CC test/rpc_client/rpc_client_test.o 00:04:11.592 LINK iscsi_fuzz 00:04:11.592 CXX test/cpp_headers/blob.o 00:04:11.592 LINK vhost 00:04:11.592 CC test/nvme/aer/aer.o 00:04:11.592 LINK app_repeat 00:04:11.848 CC test/env/pci/pci_ut.o 00:04:11.848 CC test/event/scheduler/scheduler.o 00:04:11.848 LINK rpc_client_test 00:04:11.848 CXX test/cpp_headers/conf.o 00:04:12.105 LINK aer 00:04:12.105 CC test/nvme/reset/reset.o 00:04:12.105 LINK scheduler 00:04:12.105 CXX test/cpp_headers/config.o 00:04:12.105 CC test/blobfs/mkfs/mkfs.o 00:04:12.105 CC test/accel/dif/dif.o 00:04:12.105 CXX test/cpp_headers/cpuset.o 00:04:12.362 CC test/nvme/sgl/sgl.o 00:04:12.362 LINK pci_ut 00:04:12.362 CXX test/cpp_headers/crc16.o 00:04:12.362 CC test/lvol/esnap/esnap.o 00:04:12.362 LINK mkfs 00:04:12.620 LINK reset 00:04:12.620 CC examples/idxd/perf/perf.o 00:04:12.620 CXX test/cpp_headers/crc32.o 00:04:12.620 LINK sgl 00:04:12.620 CC test/nvme/e2edp/nvme_dp.o 00:04:12.878 CXX test/cpp_headers/crc64.o 00:04:12.878 CC test/nvme/overhead/overhead.o 00:04:12.878 LINK memory_ut 00:04:12.878 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:12.878 CXX test/cpp_headers/dif.o 00:04:12.878 LINK dif 00:04:12.878 LINK idxd_perf 00:04:13.135 LINK nvme_dp 00:04:13.393 CC test/nvme/err_injection/err_injection.o 00:04:13.393 CXX test/cpp_headers/dma.o 00:04:13.393 LINK overhead 00:04:13.393 CXX test/cpp_headers/endian.o 00:04:13.393 LINK hello_fsdev 00:04:13.650 CC examples/accel/perf/accel_perf.o 00:04:13.650 CXX test/cpp_headers/env_dpdk.o 00:04:13.650 CC test/nvme/startup/startup.o 00:04:13.650 CC 
examples/blob/hello_world/hello_blob.o 00:04:13.650 LINK err_injection 00:04:13.650 CC test/nvme/reserve/reserve.o 00:04:13.908 CXX test/cpp_headers/env.o 00:04:13.908 LINK startup 00:04:13.908 CC examples/blob/cli/blobcli.o 00:04:13.908 LINK hello_blob 00:04:14.166 CC test/nvme/simple_copy/simple_copy.o 00:04:14.166 CXX test/cpp_headers/event.o 00:04:14.166 LINK reserve 00:04:14.166 CC examples/nvme/hello_world/hello_world.o 00:04:14.166 LINK accel_perf 00:04:14.166 CXX test/cpp_headers/fd_group.o 00:04:14.425 CC test/nvme/connect_stress/connect_stress.o 00:04:14.425 LINK simple_copy 00:04:14.425 LINK hello_world 00:04:14.425 CC test/nvme/boot_partition/boot_partition.o 00:04:14.425 CXX test/cpp_headers/fd.o 00:04:14.425 CC test/bdev/bdevio/bdevio.o 00:04:14.683 CC examples/nvme/reconnect/reconnect.o 00:04:14.683 CXX test/cpp_headers/file.o 00:04:14.683 LINK blobcli 00:04:14.683 LINK connect_stress 00:04:14.683 LINK boot_partition 00:04:14.683 CC test/nvme/compliance/nvme_compliance.o 00:04:14.941 CXX test/cpp_headers/fsdev.o 00:04:14.941 CC examples/bdev/hello_world/hello_bdev.o 00:04:14.941 CC test/nvme/fused_ordering/fused_ordering.o 00:04:14.941 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:14.941 LINK bdevio 00:04:14.941 LINK reconnect 00:04:14.941 CXX test/cpp_headers/fsdev_module.o 00:04:14.941 CC examples/bdev/bdevperf/bdevperf.o 00:04:15.198 CXX test/cpp_headers/ftl.o 00:04:15.198 LINK doorbell_aers 00:04:15.198 LINK nvme_compliance 00:04:15.198 LINK fused_ordering 00:04:15.199 LINK hello_bdev 00:04:15.199 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:15.199 CC test/nvme/fdp/fdp.o 00:04:15.457 CXX test/cpp_headers/fuse_dispatcher.o 00:04:15.457 CC test/nvme/cuse/cuse.o 00:04:15.457 CC examples/nvme/arbitration/arbitration.o 00:04:15.457 CC examples/nvme/hotplug/hotplug.o 00:04:15.457 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:15.716 CXX test/cpp_headers/gpt_spec.o 00:04:15.716 LINK fdp 00:04:15.716 LINK hotplug 00:04:15.716 LINK nvme_manage 00:04:15.974 LINK arbitration 00:04:15.974 CXX test/cpp_headers/hexlify.o 00:04:15.974 LINK cmb_copy 00:04:15.974 LINK bdevperf 00:04:15.974 CXX test/cpp_headers/histogram_data.o 00:04:15.974 CXX test/cpp_headers/idxd.o 00:04:15.974 CXX test/cpp_headers/idxd_spec.o 00:04:16.232 CXX test/cpp_headers/init.o 00:04:16.232 CXX test/cpp_headers/ioat.o 00:04:16.232 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:16.232 CC examples/nvme/abort/abort.o 00:04:16.232 CXX test/cpp_headers/ioat_spec.o 00:04:16.232 CXX test/cpp_headers/iscsi_spec.o 00:04:16.232 CXX test/cpp_headers/json.o 00:04:16.232 CXX test/cpp_headers/jsonrpc.o 00:04:16.489 CXX test/cpp_headers/keyring.o 00:04:16.489 LINK pmr_persistence 00:04:16.489 CXX test/cpp_headers/keyring_module.o 00:04:16.489 CXX test/cpp_headers/likely.o 00:04:16.489 CXX test/cpp_headers/log.o 00:04:16.489 CXX test/cpp_headers/lvol.o 00:04:16.489 CXX test/cpp_headers/md5.o 00:04:16.489 CXX test/cpp_headers/memory.o 00:04:16.489 CXX test/cpp_headers/mmio.o 00:04:16.489 LINK abort 00:04:16.489 CXX test/cpp_headers/nbd.o 00:04:16.489 CXX test/cpp_headers/net.o 00:04:16.747 CXX test/cpp_headers/notify.o 00:04:16.747 CXX test/cpp_headers/nvme.o 00:04:16.747 CXX test/cpp_headers/nvme_intel.o 00:04:16.747 CXX test/cpp_headers/nvme_ocssd.o 00:04:16.747 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:16.747 CXX test/cpp_headers/nvme_spec.o 00:04:16.747 CXX test/cpp_headers/nvme_zns.o 00:04:16.747 CXX test/cpp_headers/nvmf_cmd.o 00:04:16.747 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:16.747 LINK cuse 
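The long run of "CXX test/cpp_headers/<name>.o" steps here is SPDK's header self-containedness check: each public header listed in the TEST_HEADER lines earlier is compiled in its own translation unit, so a header missing one of its own includes fails the build. A rough bash equivalent of that check — the paths and compiler flags are assumed for illustration, not taken from the log:

  for hdr in include/spdk/*.h; do
    # compile a stub translation unit that includes exactly one public header
    echo "#include <spdk/${hdr##*/}>" | g++ -x c++ -I include -c - -o /dev/null
  done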
00:04:17.003 CXX test/cpp_headers/nvmf.o 00:04:17.003 CXX test/cpp_headers/nvmf_spec.o 00:04:17.003 CXX test/cpp_headers/nvmf_transport.o 00:04:17.003 CXX test/cpp_headers/opal.o 00:04:17.003 CC examples/nvmf/nvmf/nvmf.o 00:04:17.003 CXX test/cpp_headers/opal_spec.o 00:04:17.003 CXX test/cpp_headers/pci_ids.o 00:04:17.003 CXX test/cpp_headers/pipe.o 00:04:17.003 CXX test/cpp_headers/queue.o 00:04:17.003 CXX test/cpp_headers/reduce.o 00:04:17.260 CXX test/cpp_headers/rpc.o 00:04:17.260 CXX test/cpp_headers/scheduler.o 00:04:17.260 CXX test/cpp_headers/scsi.o 00:04:17.260 CXX test/cpp_headers/scsi_spec.o 00:04:17.260 CXX test/cpp_headers/sock.o 00:04:17.260 CXX test/cpp_headers/stdinc.o 00:04:17.260 CXX test/cpp_headers/string.o 00:04:17.260 CXX test/cpp_headers/thread.o 00:04:17.260 CXX test/cpp_headers/trace.o 00:04:17.260 CXX test/cpp_headers/trace_parser.o 00:04:17.537 CXX test/cpp_headers/tree.o 00:04:17.537 LINK nvmf 00:04:17.537 CXX test/cpp_headers/ublk.o 00:04:17.537 CXX test/cpp_headers/util.o 00:04:17.537 CXX test/cpp_headers/uuid.o 00:04:17.537 CXX test/cpp_headers/version.o 00:04:17.537 CXX test/cpp_headers/vfio_user_pci.o 00:04:17.537 CXX test/cpp_headers/vfio_user_spec.o 00:04:17.537 CXX test/cpp_headers/vhost.o 00:04:17.537 CXX test/cpp_headers/vmd.o 00:04:17.537 CXX test/cpp_headers/xor.o 00:04:17.537 CXX test/cpp_headers/zipf.o 00:04:18.917 LINK esnap 00:04:19.175 00:04:19.175 real 1m41.589s 00:04:19.175 user 9m43.794s 00:04:19.175 sys 1m44.632s 00:04:19.175 17:04:01 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:19.175 17:04:01 make -- common/autotest_common.sh@10 -- $ set +x 00:04:19.175 ************************************ 00:04:19.175 END TEST make 00:04:19.175 ************************************ 00:04:19.175 17:04:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:19.175 17:04:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:19.175 17:04:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:19.175 17:04:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.175 17:04:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:19.175 17:04:01 -- pm/common@44 -- $ pid=5310 00:04:19.175 17:04:01 -- pm/common@50 -- $ kill -TERM 5310 00:04:19.175 17:04:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.175 17:04:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:19.175 17:04:01 -- pm/common@44 -- $ pid=5312 00:04:19.175 17:04:01 -- pm/common@50 -- $ kill -TERM 5312 00:04:19.175 17:04:01 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:19.175 17:04:01 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:19.434 17:04:01 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:19.434 17:04:01 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:19.434 17:04:01 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:19.434 17:04:01 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:19.434 17:04:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:19.434 17:04:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:19.434 17:04:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:19.434 17:04:01 -- scripts/common.sh@336 -- # IFS=.-: 00:04:19.434 17:04:01 -- scripts/common.sh@336 -- # read -ra ver1 00:04:19.434 17:04:01 -- scripts/common.sh@337 
-- # IFS=.-: 00:04:19.434 17:04:01 -- scripts/common.sh@337 -- # read -ra ver2 00:04:19.434 17:04:01 -- scripts/common.sh@338 -- # local 'op=<' 00:04:19.434 17:04:01 -- scripts/common.sh@340 -- # ver1_l=2 00:04:19.434 17:04:01 -- scripts/common.sh@341 -- # ver2_l=1 00:04:19.434 17:04:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:19.434 17:04:01 -- scripts/common.sh@344 -- # case "$op" in 00:04:19.434 17:04:01 -- scripts/common.sh@345 -- # : 1 00:04:19.434 17:04:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:19.434 17:04:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:19.434 17:04:01 -- scripts/common.sh@365 -- # decimal 1 00:04:19.434 17:04:01 -- scripts/common.sh@353 -- # local d=1 00:04:19.434 17:04:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:19.434 17:04:01 -- scripts/common.sh@355 -- # echo 1 00:04:19.434 17:04:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:19.434 17:04:01 -- scripts/common.sh@366 -- # decimal 2 00:04:19.434 17:04:01 -- scripts/common.sh@353 -- # local d=2 00:04:19.434 17:04:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:19.434 17:04:01 -- scripts/common.sh@355 -- # echo 2 00:04:19.434 17:04:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:19.434 17:04:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:19.434 17:04:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:19.434 17:04:01 -- scripts/common.sh@368 -- # return 0 00:04:19.434 17:04:01 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:19.434 17:04:01 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.434 --rc genhtml_branch_coverage=1 00:04:19.434 --rc genhtml_function_coverage=1 00:04:19.434 --rc genhtml_legend=1 00:04:19.434 --rc geninfo_all_blocks=1 00:04:19.434 --rc geninfo_unexecuted_blocks=1 00:04:19.434 00:04:19.434 ' 00:04:19.434 17:04:01 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.434 --rc genhtml_branch_coverage=1 00:04:19.434 --rc genhtml_function_coverage=1 00:04:19.434 --rc genhtml_legend=1 00:04:19.434 --rc geninfo_all_blocks=1 00:04:19.434 --rc geninfo_unexecuted_blocks=1 00:04:19.434 00:04:19.434 ' 00:04:19.434 17:04:01 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.434 --rc genhtml_branch_coverage=1 00:04:19.434 --rc genhtml_function_coverage=1 00:04:19.434 --rc genhtml_legend=1 00:04:19.434 --rc geninfo_all_blocks=1 00:04:19.434 --rc geninfo_unexecuted_blocks=1 00:04:19.434 00:04:19.434 ' 00:04:19.434 17:04:01 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:19.434 --rc genhtml_branch_coverage=1 00:04:19.434 --rc genhtml_function_coverage=1 00:04:19.434 --rc genhtml_legend=1 00:04:19.434 --rc geninfo_all_blocks=1 00:04:19.434 --rc geninfo_unexecuted_blocks=1 00:04:19.434 00:04:19.434 ' 00:04:19.434 17:04:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:19.434 17:04:01 -- nvmf/common.sh@7 -- # uname -s 00:04:19.434 17:04:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:19.434 17:04:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:19.434 17:04:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:19.434 17:04:01 -- nvmf/common.sh@11 -- 
# NVMF_THIRD_PORT=4422 00:04:19.434 17:04:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:19.434 17:04:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:19.434 17:04:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:19.434 17:04:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:19.434 17:04:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:19.434 17:04:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:19.434 17:04:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:04:19.434 17:04:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:04:19.434 17:04:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:19.434 17:04:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:19.434 17:04:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:19.434 17:04:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:19.434 17:04:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:19.434 17:04:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:19.434 17:04:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:19.434 17:04:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:19.434 17:04:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:19.434 17:04:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.434 17:04:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.435 17:04:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.435 17:04:01 -- paths/export.sh@5 -- # export PATH 00:04:19.435 17:04:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:19.435 17:04:01 -- nvmf/common.sh@51 -- # : 0 00:04:19.435 17:04:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:19.435 17:04:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:19.435 17:04:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:19.435 17:04:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:19.435 17:04:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:19.435 17:04:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:19.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:19.435 17:04:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:19.435 17:04:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:19.435 17:04:01 -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:04:19.435 17:04:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:19.435 17:04:01 -- spdk/autotest.sh@32 -- # uname -s 00:04:19.435 17:04:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:19.435 17:04:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:19.435 17:04:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.435 17:04:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:19.435 17:04:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:19.435 17:04:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:19.435 17:04:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:19.435 17:04:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:19.435 17:04:01 -- spdk/autotest.sh@48 -- # udevadm_pid=56195 00:04:19.435 17:04:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:19.435 17:04:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:19.435 17:04:01 -- pm/common@17 -- # local monitor 00:04:19.435 17:04:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.435 17:04:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:19.435 17:04:01 -- pm/common@21 -- # date +%s 00:04:19.435 17:04:01 -- pm/common@25 -- # sleep 1 00:04:19.435 17:04:01 -- pm/common@21 -- # date +%s 00:04:19.435 17:04:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733504641 00:04:19.435 17:04:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733504641 00:04:19.435 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733504641_collect-vmstat.pm.log 00:04:19.435 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733504641_collect-cpu-load.pm.log 00:04:20.371 17:04:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:20.371 17:04:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:20.371 17:04:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.371 17:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:20.371 17:04:02 -- spdk/autotest.sh@59 -- # create_test_list 00:04:20.371 17:04:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:20.630 17:04:02 -- common/autotest_common.sh@10 -- # set +x 00:04:20.630 17:04:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:20.630 17:04:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:20.630 17:04:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:20.630 17:04:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:20.630 17:04:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:20.630 17:04:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:20.630 17:04:02 -- common/autotest_common.sh@1457 -- # uname 00:04:20.630 17:04:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:20.630 17:04:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:20.630 17:04:02 -- common/autotest_common.sh@1477 -- # uname 00:04:20.630 17:04:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 
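Editor's note: the core_pattern and monitor lines above deserve a gloss. The kernel can pipe core dumps to an arbitrary collector program, and the resource monitors are plain background processes tracked by PID files and later stopped with SIGTERM (the kill -TERM lines earlier in this log). A minimal sketch of both idioms follows; the paths and filenames are illustrative, not the exact SPDK helper scripts.

#!/usr/bin/env bash
# Requires root: core_pattern is a global kernel setting.
old_pattern=$(cat /proc/sys/kernel/core_pattern)
# A leading '|' makes the kernel pipe each core dump into the named program;
# %P = PID, %s = signal number, %t = dump time (standard core_pattern specifiers).
echo '|/usr/local/bin/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern

# Background resource monitor with a PID file, stopped later via SIGTERM.
mkdir -p /tmp/power
vmstat 1 >> /tmp/power/collect-vmstat.log &
echo $! > /tmp/power/collect-vmstat.pid
# ... run the test suite ...
kill -TERM "$(cat /tmp/power/collect-vmstat.pid)"
echo "$old_pattern" > /proc/sys/kernel/core_pattern   # restore the original pattern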
00:04:20.630 17:04:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:20.630 17:04:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:20.630 lcov: LCOV version 1.15 00:04:20.630 17:04:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:38.714 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:38.714 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.644 17:04:39 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:00.644 17:04:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.644 17:04:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.644 17:04:39 -- spdk/autotest.sh@78 -- # rm -f 00:05:00.644 17:04:39 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.644 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.644 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:00.644 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:00.644 17:04:40 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:00.644 17:04:40 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:00.644 17:04:40 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:00.644 17:04:40 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:00.644 17:04:40 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:00.644 17:04:40 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:00.644 17:04:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:00.644 17:04:40 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:00.644 17:04:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.644 17:04:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:00.644 17:04:40 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:00.644 17:04:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:00.644 17:04:40 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:00.644 17:04:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.644 17:04:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:00.644 17:04:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:00.644 17:04:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.644 17:04:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:05:00.644 17:04:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:00.644 17:04:40 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:00.644 17:04:40 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:05:00.644 17:04:40 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:00.644 17:04:40 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.644 17:04:40 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.644 17:04:40 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:00.644 17:04:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.644 17:04:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.645 17:04:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:00.645 17:04:40 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:00.645 17:04:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.645 No valid GPT data, bailing 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # pt= 00:05:00.645 17:04:40 -- scripts/common.sh@395 -- # return 1 00:05:00.645 17:04:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.645 1+0 records in 00:05:00.645 1+0 records out 00:05:00.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457282 s, 229 MB/s 00:05:00.645 17:04:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.645 17:04:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.645 17:04:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:00.645 17:04:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:00.645 17:04:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:00.645 No valid GPT data, bailing 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # pt= 00:05:00.645 17:04:40 -- scripts/common.sh@395 -- # return 1 00:05:00.645 17:04:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:00.645 1+0 records in 00:05:00.645 1+0 records out 00:05:00.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463121 s, 226 MB/s 00:05:00.645 17:04:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.645 17:04:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.645 17:04:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:00.645 17:04:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:00.645 17:04:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:00.645 No valid GPT data, bailing 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # pt= 00:05:00.645 17:04:40 -- scripts/common.sh@395 -- # return 1 00:05:00.645 17:04:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:00.645 1+0 records in 00:05:00.645 1+0 records out 00:05:00.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449028 s, 234 MB/s 00:05:00.645 17:04:40 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.645 17:04:40 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.645 17:04:40 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:00.645 
17:04:40 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:00.645 17:04:40 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:00.645 No valid GPT data, bailing 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:00.645 17:04:40 -- scripts/common.sh@394 -- # pt= 00:05:00.645 17:04:40 -- scripts/common.sh@395 -- # return 1 00:05:00.645 17:04:40 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:00.645 1+0 records in 00:05:00.645 1+0 records out 00:05:00.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00413772 s, 253 MB/s 00:05:00.645 17:04:40 -- spdk/autotest.sh@105 -- # sync 00:05:00.645 17:04:40 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.645 17:04:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.645 17:04:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:00.645 17:04:42 -- spdk/autotest.sh@111 -- # uname -s 00:05:00.645 17:04:42 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:00.645 17:04:42 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:00.645 17:04:42 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:00.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.903 Hugepages 00:05:00.903 node hugesize free / total 00:05:00.903 node0 1048576kB 0 / 0 00:05:00.903 node0 2048kB 0 / 0 00:05:00.903 00:05:00.903 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:00.903 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.162 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:01.162 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:01.162 17:04:43 -- spdk/autotest.sh@117 -- # uname -s 00:05:01.162 17:04:43 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:01.162 17:04:43 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:01.162 17:04:43 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:01.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.988 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.988 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:01.988 17:04:44 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:02.923 17:04:45 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:02.923 17:04:45 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:02.923 17:04:45 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:02.923 17:04:45 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:02.923 17:04:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:02.923 17:04:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:02.923 17:04:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:02.923 17:04:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:02.923 17:04:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:03.180 17:04:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:03.180 17:04:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:03.180 17:04:45 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 
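Editor's note: the block_in_use / dd sequence above is a simple "revert" of the test namespaces: any NVMe namespace without a recognizable partition table gets its first MiB zeroed, so stale GPT or filesystem signatures cannot leak into the next test run. A condensed sketch of the same loop, using only the commands visible in the log (the real script additionally consults scripts/spdk-gpt.py, omitted here):

#!/usr/bin/env bash
shopt -s extglob   # enables nvme*n!(*p*): whole namespaces, but not partitions
for dev in /dev/nvme*n!(*p*); do
    # blkid prints the partition-table type (e.g. "gpt") if one exists;
    # empty output means the device carries no recognizable label.
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z "$pt" ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # wipe signatures in the first MiB
    fi
done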
00:05:03.437 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:03.437 Waiting for block devices as requested 00:05:03.437 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:03.437 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:03.695 17:04:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:03.695 17:04:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:03.695 17:04:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:03.695 17:04:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:03.695 17:04:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:03.695 17:04:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1543 -- # continue 00:05:03.695 17:04:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:03.695 17:04:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:03.695 17:04:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:03.695 17:04:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:03.695 17:04:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:03.695 17:04:45 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:03.695 17:04:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:03.695 17:04:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:03.695 17:04:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:03.695 17:04:45 -- common/autotest_common.sh@1543 -- # continue 00:05:03.695 17:04:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:03.695 17:04:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.695 17:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:03.695 17:04:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:03.695 17:04:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.695 17:04:45 -- common/autotest_common.sh@10 -- # set +x 00:05:03.695 17:04:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:04.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.647 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.647 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:04.647 17:04:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:04.647 17:04:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.647 17:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:04.647 17:04:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:04.647 17:04:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:04.647 17:04:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:04.647 17:04:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:04.647 17:04:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:04.647 17:04:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:04.647 17:04:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:04.647 17:04:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:04.647 17:04:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:04.647 17:04:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:04.647 17:04:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:04.647 17:04:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:04.647 17:04:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:04.647 17:04:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:04.647 17:04:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.647 17:04:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:04.647 17:04:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:04.647 17:04:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:04.647 17:04:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:04.647 17:04:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:04.647 17:04:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:04.647 17:04:46 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:04.647 17:04:46 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
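Editor's note: the id-ctrl parsing above decides whether each controller needs an OPAL/namespace revert. OACS is a bitmask of optional admin commands; here it reads 0x12a, and 0x12a & 0x8 = 8 means bit 3 (Namespace Management) is set, while an unvmcap of 0 means there is no unallocated capacity to reclaim, so the loop simply continues. The check reduces to roughly this (device node is illustrative):

#!/usr/bin/env bash
ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)        # e.g. " 0x12a"
unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)  # e.g. " 0"
if (( oacs & 0x8 )) && (( unvmcap != 0 )); then
    echo "$ctrl supports namespace management and has capacity to revert"
fi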
00:05:04.647 17:04:46 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:04.647 17:04:46 -- common/autotest_common.sh@1572 -- # return 0 00:05:04.647 17:04:46 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:04.647 17:04:46 -- common/autotest_common.sh@1580 -- # return 0 00:05:04.647 17:04:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:04.647 17:04:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:04.647 17:04:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:04.647 17:04:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:04.647 17:04:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:04.647 17:04:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.647 17:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:04.647 17:04:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:04.647 17:04:46 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:04.647 17:04:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.647 17:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.647 17:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:04.936 ************************************ 00:05:04.936 START TEST env 00:05:04.936 ************************************ 00:05:04.936 17:04:46 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:04.936 * Looking for test storage... 00:05:04.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.936 17:04:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.936 17:04:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.936 17:04:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.936 17:04:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.936 17:04:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.936 17:04:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.936 17:04:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.936 17:04:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.936 17:04:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.936 17:04:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.936 17:04:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.936 17:04:47 env -- scripts/common.sh@344 -- # case "$op" in 00:05:04.936 17:04:47 env -- scripts/common.sh@345 -- # : 1 00:05:04.936 17:04:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.936 17:04:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.936 17:04:47 env -- scripts/common.sh@365 -- # decimal 1 00:05:04.936 17:04:47 env -- scripts/common.sh@353 -- # local d=1 00:05:04.936 17:04:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.936 17:04:47 env -- scripts/common.sh@355 -- # echo 1 00:05:04.936 17:04:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.936 17:04:47 env -- scripts/common.sh@366 -- # decimal 2 00:05:04.936 17:04:47 env -- scripts/common.sh@353 -- # local d=2 00:05:04.936 17:04:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.936 17:04:47 env -- scripts/common.sh@355 -- # echo 2 00:05:04.936 17:04:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.936 17:04:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.936 17:04:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.936 17:04:47 env -- scripts/common.sh@368 -- # return 0 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.936 --rc genhtml_branch_coverage=1 00:05:04.936 --rc genhtml_function_coverage=1 00:05:04.936 --rc genhtml_legend=1 00:05:04.936 --rc geninfo_all_blocks=1 00:05:04.936 --rc geninfo_unexecuted_blocks=1 00:05:04.936 00:05:04.936 ' 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.936 --rc genhtml_branch_coverage=1 00:05:04.936 --rc genhtml_function_coverage=1 00:05:04.936 --rc genhtml_legend=1 00:05:04.936 --rc geninfo_all_blocks=1 00:05:04.936 --rc geninfo_unexecuted_blocks=1 00:05:04.936 00:05:04.936 ' 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.936 --rc genhtml_branch_coverage=1 00:05:04.936 --rc genhtml_function_coverage=1 00:05:04.936 --rc genhtml_legend=1 00:05:04.936 --rc geninfo_all_blocks=1 00:05:04.936 --rc geninfo_unexecuted_blocks=1 00:05:04.936 00:05:04.936 ' 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.936 --rc genhtml_branch_coverage=1 00:05:04.936 --rc genhtml_function_coverage=1 00:05:04.936 --rc genhtml_legend=1 00:05:04.936 --rc geninfo_all_blocks=1 00:05:04.936 --rc geninfo_unexecuted_blocks=1 00:05:04.936 00:05:04.936 ' 00:05:04.936 17:04:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.936 17:04:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.936 17:04:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:04.936 ************************************ 00:05:04.936 START TEST env_memory 00:05:04.936 ************************************ 00:05:04.936 17:04:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:04.936 00:05:04.936 00:05:04.936 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.936 http://cunit.sourceforge.net/ 00:05:04.936 00:05:04.936 00:05:04.936 Suite: memory 00:05:04.936 Test: alloc and free memory map ...[2024-12-06 17:04:47.185228] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.936 passed 00:05:04.936 Test: mem map translation ...[2024-12-06 17:04:47.217134] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.936 [2024-12-06 17:04:47.217769] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.936 [2024-12-06 17:04:47.218410] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.936 [2024-12-06 17:04:47.218435] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.196 passed 00:05:05.196 Test: mem map registration ...[2024-12-06 17:04:47.286810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:05.196 [2024-12-06 17:04:47.286877] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:05.196 passed 00:05:05.196 Test: mem map adjacent registrations ...passed 00:05:05.196 00:05:05.196 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.196 suites 1 1 n/a 0 0 00:05:05.196 tests 4 4 4 0 0 00:05:05.196 asserts 152 152 152 0 n/a 00:05:05.196 00:05:05.196 Elapsed time = 0.217 seconds 00:05:05.196 00:05:05.196 real 0m0.235s 00:05:05.196 user 0m0.219s 00:05:05.196 sys 0m0.010s 00:05:05.196 17:04:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.196 ************************************ 00:05:05.196 END TEST env_memory 00:05:05.196 ************************************ 00:05:05.196 17:04:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:05.196 17:04:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.196 17:04:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.196 17:04:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.196 17:04:47 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.196 ************************************ 00:05:05.196 START TEST env_vtophys 00:05:05.196 ************************************ 00:05:05.196 17:04:47 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:05.196 EAL: lib.eal log level changed from notice to debug 00:05:05.196 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 1 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 2 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 3 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 4 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 5 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 6 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 7 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 8 as core 0 on socket 0 00:05:05.196 EAL: Detected lcore 9 as core 0 on socket 0 00:05:05.196 EAL: Maximum logical cores by configuration: 128 00:05:05.196 EAL: Detected CPU lcores: 10 00:05:05.196 EAL: Detected NUMA nodes: 1 00:05:05.196 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:05.196 EAL: Detected shared linkage of DPDK 00:05:05.196 EAL: No 
shared files mode enabled, IPC will be disabled 00:05:05.196 EAL: Selected IOVA mode 'PA' 00:05:05.196 EAL: Probing VFIO support... 00:05:05.196 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.196 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:05.196 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.196 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.196 EAL: Setting up physically contiguous memory... 00:05:05.196 EAL: Setting maximum number of open files to 524288 00:05:05.196 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.196 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.196 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.196 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.196 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.196 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.196 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.196 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.196 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.196 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.196 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.196 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.196 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.196 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.196 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.196 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.196 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.196 EAL: Hugepages will be freed exactly as allocated. 00:05:05.196 EAL: No shared files mode enabled, IPC is disabled 00:05:05.196 EAL: No shared files mode enabled, IPC is disabled 00:05:05.455 EAL: TSC frequency is ~2200000 KHz 00:05:05.455 EAL: Main lcore 0 is ready (tid=7fce428e0a00;cpuset=[0]) 00:05:05.455 EAL: Trying to obtain current memory policy. 00:05:05.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.455 EAL: Restoring previous memory policy: 0 00:05:05.455 EAL: request: mp_malloc_sync 00:05:05.455 EAL: No shared files mode enabled, IPC is disabled 00:05:05.455 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.455 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:05.455 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.455 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.455 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:05:05.455 00:05:05.455 00:05:05.455 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.455 http://cunit.sourceforge.net/ 00:05:05.455 00:05:05.455 00:05:05.455 Suite: components_suite 00:05:05.455 Test: vtophys_malloc_test ...passed 00:05:05.455 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:05.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.455 EAL: Restoring previous memory policy: 4 00:05:05.455 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.455 EAL: request: mp_malloc_sync 00:05:05.455 EAL: No shared files mode enabled, IPC is disabled 00:05:05.455 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.455 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.455 EAL: request: mp_malloc_sync 00:05:05.455 EAL: No shared files mode enabled, IPC is disabled 00:05:05.455 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.455 EAL: Trying to obtain current memory policy. 00:05:05.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.455 EAL: Restoring previous memory policy: 4 00:05:05.455 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.456 EAL: Trying to obtain current memory policy. 00:05:05.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.456 EAL: Restoring previous memory policy: 4 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.456 EAL: Trying to obtain current memory policy. 00:05:05.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.456 EAL: Restoring previous memory policy: 4 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.456 EAL: Trying to obtain current memory policy. 00:05:05.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.456 EAL: Restoring previous memory policy: 4 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.456 EAL: Trying to obtain current memory policy. 
00:05:05.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.456 EAL: Restoring previous memory policy: 4 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.456 EAL: Trying to obtain current memory policy. 00:05:05.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.456 EAL: Restoring previous memory policy: 4 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.456 EAL: Trying to obtain current memory policy. 00:05:05.456 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.456 EAL: Restoring previous memory policy: 4 00:05:05.456 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.456 EAL: request: mp_malloc_sync 00:05:05.456 EAL: No shared files mode enabled, IPC is disabled 00:05:05.456 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.714 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.714 EAL: request: mp_malloc_sync 00:05:05.714 EAL: No shared files mode enabled, IPC is disabled 00:05:05.714 EAL: Heap on socket 0 was shrunk by 258MB 00:05:05.714 EAL: Trying to obtain current memory policy. 00:05:05.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.714 EAL: Restoring previous memory policy: 4 00:05:05.714 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.714 EAL: request: mp_malloc_sync 00:05:05.714 EAL: No shared files mode enabled, IPC is disabled 00:05:05.714 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.714 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.714 EAL: request: mp_malloc_sync 00:05:05.714 EAL: No shared files mode enabled, IPC is disabled 00:05:05.714 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.714 EAL: Trying to obtain current memory policy. 
00:05:05.714 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.973 EAL: Restoring previous memory policy: 4 00:05:05.973 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.973 EAL: request: mp_malloc_sync 00:05:05.973 EAL: No shared files mode enabled, IPC is disabled 00:05:05.973 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.973 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.232 passed 00:05:06.232 00:05:06.232 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.232 suites 1 1 n/a 0 0 00:05:06.232 tests 2 2 2 0 0 00:05:06.232 asserts 5463 5463 5463 0 n/a 00:05:06.232 00:05:06.232 Elapsed time = 0.716 seconds 00:05:06.232 EAL: request: mp_malloc_sync 00:05:06.232 EAL: No shared files mode enabled, IPC is disabled 00:05:06.232 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.232 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.232 EAL: request: mp_malloc_sync 00:05:06.232 EAL: No shared files mode enabled, IPC is disabled 00:05:06.232 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.232 EAL: No shared files mode enabled, IPC is disabled 00:05:06.232 EAL: No shared files mode enabled, IPC is disabled 00:05:06.232 EAL: No shared files mode enabled, IPC is disabled 00:05:06.232 00:05:06.232 real 0m0.932s 00:05:06.232 user 0m0.485s 00:05:06.232 sys 0m0.309s 00:05:06.232 17:04:48 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.232 ************************************ 00:05:06.232 END TEST env_vtophys 00:05:06.232 ************************************ 00:05:06.232 17:04:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.232 17:04:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.232 17:04:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.232 17:04:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.232 17:04:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.232 ************************************ 00:05:06.232 START TEST env_pci 00:05:06.232 ************************************ 00:05:06.232 17:04:48 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.232 00:05:06.232 00:05:06.232 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.232 http://cunit.sourceforge.net/ 00:05:06.232 00:05:06.232 00:05:06.232 Suite: pci 00:05:06.232 Test: pci_hook ...[2024-12-06 17:04:48.417784] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58489 has claimed it 00:05:06.232 passed 00:05:06.232 00:05:06.232 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.232 suites 1 1 n/a 0 0 00:05:06.232 tests 1 1 1 0 0 00:05:06.232 asserts 25 25 25 0 n/a 00:05:06.232 00:05:06.232 Elapsed time = 0.002 seconds 00:05:06.232 EAL: Cannot find device (10000:00:01.0) 00:05:06.232 EAL: Failed to attach device on primary process 00:05:06.232 00:05:06.232 real 0m0.018s 00:05:06.232 user 0m0.005s 00:05:06.232 sys 0m0.012s 00:05:06.232 17:04:48 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.232 17:04:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.232 ************************************ 00:05:06.232 END TEST env_pci 00:05:06.232 ************************************ 00:05:06.232 17:04:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.232 17:04:48 env -- env/env.sh@15 -- # uname 00:05:06.232 17:04:48 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.232 17:04:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.232 17:04:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.233 17:04:48 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:06.233 17:04:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.233 17:04:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.233 ************************************ 00:05:06.233 START TEST env_dpdk_post_init 00:05:06.233 ************************************ 00:05:06.233 17:04:48 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.233 EAL: Detected CPU lcores: 10 00:05:06.233 EAL: Detected NUMA nodes: 1 00:05:06.233 EAL: Detected shared linkage of DPDK 00:05:06.233 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.233 EAL: Selected IOVA mode 'PA' 00:05:06.492 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.492 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:06.492 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:06.492 Starting DPDK initialization... 00:05:06.492 Starting SPDK post initialization... 00:05:06.492 SPDK NVMe probe 00:05:06.492 Attaching to 0000:00:10.0 00:05:06.492 Attaching to 0000:00:11.0 00:05:06.492 Attached to 0000:00:10.0 00:05:06.492 Attached to 0000:00:11.0 00:05:06.492 Cleaning up... 00:05:06.492 00:05:06.492 real 0m0.193s 00:05:06.492 user 0m0.058s 00:05:06.492 sys 0m0.035s 00:05:06.492 17:04:48 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.492 17:04:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.492 ************************************ 00:05:06.492 END TEST env_dpdk_post_init 00:05:06.492 ************************************ 00:05:06.492 17:04:48 env -- env/env.sh@26 -- # uname 00:05:06.492 17:04:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:06.492 17:04:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.492 17:04:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.492 17:04:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.492 17:04:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.492 ************************************ 00:05:06.492 START TEST env_mem_callbacks 00:05:06.492 ************************************ 00:05:06.492 17:04:48 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.492 EAL: Detected CPU lcores: 10 00:05:06.492 EAL: Detected NUMA nodes: 1 00:05:06.492 EAL: Detected shared linkage of DPDK 00:05:06.492 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.492 EAL: Selected IOVA mode 'PA' 00:05:06.751 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.751 00:05:06.751 00:05:06.751 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.751 http://cunit.sourceforge.net/ 00:05:06.751 00:05:06.751 00:05:06.751 Suite: memory 00:05:06.751 Test: test ... 
00:05:06.751 register 0x200000200000 2097152 00:05:06.751 malloc 3145728 00:05:06.751 register 0x200000400000 4194304 00:05:06.751 buf 0x200000500000 len 3145728 PASSED 00:05:06.751 malloc 64 00:05:06.751 buf 0x2000004fff40 len 64 PASSED 00:05:06.751 malloc 4194304 00:05:06.751 register 0x200000800000 6291456 00:05:06.751 buf 0x200000a00000 len 4194304 PASSED 00:05:06.751 free 0x200000500000 3145728 00:05:06.751 free 0x2000004fff40 64 00:05:06.751 unregister 0x200000400000 4194304 PASSED 00:05:06.751 free 0x200000a00000 4194304 00:05:06.751 unregister 0x200000800000 6291456 PASSED 00:05:06.751 malloc 8388608 00:05:06.751 register 0x200000400000 10485760 00:05:06.751 buf 0x200000600000 len 8388608 PASSED 00:05:06.751 free 0x200000600000 8388608 00:05:06.751 unregister 0x200000400000 10485760 PASSED 00:05:06.751 passed 00:05:06.751 00:05:06.751 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.751 suites 1 1 n/a 0 0 00:05:06.751 tests 1 1 1 0 0 00:05:06.751 asserts 15 15 15 0 n/a 00:05:06.751 00:05:06.751 Elapsed time = 0.007 seconds 00:05:06.751 00:05:06.751 real 0m0.144s 00:05:06.751 user 0m0.018s 00:05:06.751 sys 0m0.023s 00:05:06.751 17:04:48 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.751 17:04:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:06.751 ************************************ 00:05:06.751 END TEST env_mem_callbacks 00:05:06.751 ************************************ 00:05:06.751 00:05:06.751 real 0m1.987s 00:05:06.751 user 0m0.993s 00:05:06.751 sys 0m0.629s 00:05:06.751 ************************************ 00:05:06.751 END TEST env 00:05:06.751 ************************************ 00:05:06.751 17:04:48 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.751 17:04:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.751 17:04:48 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:06.751 17:04:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.751 17:04:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.751 17:04:48 -- common/autotest_common.sh@10 -- # set +x 00:05:06.751 ************************************ 00:05:06.751 START TEST rpc 00:05:06.751 ************************************ 00:05:06.751 17:04:48 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:06.751 * Looking for test storage... 
00:05:07.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.009 17:04:49 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.009 17:04:49 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.009 17:04:49 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.009 17:04:49 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.009 17:04:49 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.009 17:04:49 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.009 17:04:49 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.009 17:04:49 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.009 17:04:49 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.009 17:04:49 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.009 17:04:49 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.009 17:04:49 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.009 17:04:49 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.009 17:04:49 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.009 17:04:49 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.009 17:04:49 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.010 17:04:49 rpc -- scripts/common.sh@345 -- # : 1 00:05:07.010 17:04:49 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.010 17:04:49 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.010 17:04:49 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.010 17:04:49 rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.010 17:04:49 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.010 17:04:49 rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.010 17:04:49 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.010 17:04:49 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.010 17:04:49 rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.010 17:04:49 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.010 17:04:49 rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
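[note] The cmp_versions trace running above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2 before choosing legacy LCOV_OPTS: it splits each version on IFS=.-: and compares field by field. A condensed bash stand-in for that check, using sort -V instead of SPDK's field loop (version_lt is a hypothetical name, not a helper from the tree):

  # true when $1 sorts strictly before $2 in version order
  version_lt() {
    [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo 'lcov 1.15 < 2: keep legacy branch/function coverage flags'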
00:05:07.010 17:04:49 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.010 17:04:49 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.010 17:04:49 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.010 17:04:49 rpc -- scripts/common.sh@368 -- # return 0 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.010 --rc genhtml_branch_coverage=1 00:05:07.010 --rc genhtml_function_coverage=1 00:05:07.010 --rc genhtml_legend=1 00:05:07.010 --rc geninfo_all_blocks=1 00:05:07.010 --rc geninfo_unexecuted_blocks=1 00:05:07.010 00:05:07.010 ' 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.010 --rc genhtml_branch_coverage=1 00:05:07.010 --rc genhtml_function_coverage=1 00:05:07.010 --rc genhtml_legend=1 00:05:07.010 --rc geninfo_all_blocks=1 00:05:07.010 --rc geninfo_unexecuted_blocks=1 00:05:07.010 00:05:07.010 ' 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.010 --rc genhtml_branch_coverage=1 00:05:07.010 --rc genhtml_function_coverage=1 00:05:07.010 --rc genhtml_legend=1 00:05:07.010 --rc geninfo_all_blocks=1 00:05:07.010 --rc geninfo_unexecuted_blocks=1 00:05:07.010 00:05:07.010 ' 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.010 --rc genhtml_branch_coverage=1 00:05:07.010 --rc genhtml_function_coverage=1 00:05:07.010 --rc genhtml_legend=1 00:05:07.010 --rc geninfo_all_blocks=1 00:05:07.010 --rc geninfo_unexecuted_blocks=1 00:05:07.010 00:05:07.010 ' 00:05:07.010 17:04:49 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58612 00:05:07.010 17:04:49 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.010 17:04:49 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:07.010 17:04:49 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58612 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@835 -- # '[' -z 58612 ']' 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.010 17:04:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.010 [2024-12-06 17:04:49.235513] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:05:07.010 [2024-12-06 17:04:49.235833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58612 ] 00:05:07.268 [2024-12-06 17:04:49.390056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.268 [2024-12-06 17:04:49.431451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:05:07.268 [2024-12-06 17:04:49.431716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58612' to capture a snapshot of events at runtime. 00:05:07.268 [2024-12-06 17:04:49.431891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:07.268 [2024-12-06 17:04:49.432039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:07.268 [2024-12-06 17:04:49.432081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58612 for offline analysis/debug. 00:05:07.268 [2024-12-06 17:04:49.432628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.527 17:04:49 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.527 17:04:49 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.527 17:04:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.527 17:04:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.527 17:04:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:07.527 17:04:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:07.527 17:04:49 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.527 17:04:49 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.527 17:04:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.527 ************************************ 00:05:07.527 START TEST rpc_integrity 00:05:07.527 ************************************ 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:07.527 { 00:05:07.527 "aliases": [ 00:05:07.527 "457d2ef4-f056-4821-b4bc-b4b9b50f587b" 00:05:07.527 ], 00:05:07.527 "assigned_rate_limits": { 
00:05:07.527 "r_mbytes_per_sec": 0, 00:05:07.527 "rw_ios_per_sec": 0, 00:05:07.527 "rw_mbytes_per_sec": 0, 00:05:07.527 "w_mbytes_per_sec": 0 00:05:07.527 }, 00:05:07.527 "block_size": 512, 00:05:07.527 "claimed": false, 00:05:07.527 "driver_specific": {}, 00:05:07.527 "memory_domains": [ 00:05:07.527 { 00:05:07.527 "dma_device_id": "system", 00:05:07.527 "dma_device_type": 1 00:05:07.527 }, 00:05:07.527 { 00:05:07.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.527 "dma_device_type": 2 00:05:07.527 } 00:05:07.527 ], 00:05:07.527 "name": "Malloc0", 00:05:07.527 "num_blocks": 16384, 00:05:07.527 "product_name": "Malloc disk", 00:05:07.527 "supported_io_types": { 00:05:07.527 "abort": true, 00:05:07.527 "compare": false, 00:05:07.527 "compare_and_write": false, 00:05:07.527 "copy": true, 00:05:07.527 "flush": true, 00:05:07.527 "get_zone_info": false, 00:05:07.527 "nvme_admin": false, 00:05:07.527 "nvme_io": false, 00:05:07.527 "nvme_io_md": false, 00:05:07.527 "nvme_iov_md": false, 00:05:07.527 "read": true, 00:05:07.527 "reset": true, 00:05:07.527 "seek_data": false, 00:05:07.527 "seek_hole": false, 00:05:07.527 "unmap": true, 00:05:07.527 "write": true, 00:05:07.527 "write_zeroes": true, 00:05:07.527 "zcopy": true, 00:05:07.527 "zone_append": false, 00:05:07.527 "zone_management": false 00:05:07.527 }, 00:05:07.527 "uuid": "457d2ef4-f056-4821-b4bc-b4b9b50f587b", 00:05:07.527 "zoned": false 00:05:07.527 } 00:05:07.527 ]' 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:07.527 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.527 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.527 [2024-12-06 17:04:49.822982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:07.527 [2024-12-06 17:04:49.823195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:07.527 [2024-12-06 17:04:49.823296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aed3e0 00:05:07.527 [2024-12-06 17:04:49.823316] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:07.786 [2024-12-06 17:04:49.824933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:07.786 [2024-12-06 17:04:49.824971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:07.786 Passthru0 00:05:07.786 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.786 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:07.786 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.786 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.786 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.786 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:07.786 { 00:05:07.786 "aliases": [ 00:05:07.786 "457d2ef4-f056-4821-b4bc-b4b9b50f587b" 00:05:07.786 ], 00:05:07.786 "assigned_rate_limits": { 00:05:07.786 "r_mbytes_per_sec": 0, 00:05:07.786 "rw_ios_per_sec": 0, 00:05:07.786 "rw_mbytes_per_sec": 0, 00:05:07.786 "w_mbytes_per_sec": 0 00:05:07.786 }, 00:05:07.786 "block_size": 512, 00:05:07.786 "claim_type": "exclusive_write", 00:05:07.786 
"claimed": true, 00:05:07.786 "driver_specific": {}, 00:05:07.786 "memory_domains": [ 00:05:07.786 { 00:05:07.786 "dma_device_id": "system", 00:05:07.786 "dma_device_type": 1 00:05:07.786 }, 00:05:07.786 { 00:05:07.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.786 "dma_device_type": 2 00:05:07.786 } 00:05:07.786 ], 00:05:07.786 "name": "Malloc0", 00:05:07.786 "num_blocks": 16384, 00:05:07.786 "product_name": "Malloc disk", 00:05:07.786 "supported_io_types": { 00:05:07.786 "abort": true, 00:05:07.786 "compare": false, 00:05:07.786 "compare_and_write": false, 00:05:07.786 "copy": true, 00:05:07.786 "flush": true, 00:05:07.786 "get_zone_info": false, 00:05:07.786 "nvme_admin": false, 00:05:07.786 "nvme_io": false, 00:05:07.786 "nvme_io_md": false, 00:05:07.786 "nvme_iov_md": false, 00:05:07.786 "read": true, 00:05:07.786 "reset": true, 00:05:07.786 "seek_data": false, 00:05:07.786 "seek_hole": false, 00:05:07.786 "unmap": true, 00:05:07.786 "write": true, 00:05:07.786 "write_zeroes": true, 00:05:07.786 "zcopy": true, 00:05:07.786 "zone_append": false, 00:05:07.786 "zone_management": false 00:05:07.786 }, 00:05:07.786 "uuid": "457d2ef4-f056-4821-b4bc-b4b9b50f587b", 00:05:07.786 "zoned": false 00:05:07.786 }, 00:05:07.786 { 00:05:07.786 "aliases": [ 00:05:07.786 "511e5e5a-8461-56f8-9de1-7634348117c4" 00:05:07.786 ], 00:05:07.786 "assigned_rate_limits": { 00:05:07.786 "r_mbytes_per_sec": 0, 00:05:07.786 "rw_ios_per_sec": 0, 00:05:07.786 "rw_mbytes_per_sec": 0, 00:05:07.786 "w_mbytes_per_sec": 0 00:05:07.786 }, 00:05:07.786 "block_size": 512, 00:05:07.786 "claimed": false, 00:05:07.786 "driver_specific": { 00:05:07.786 "passthru": { 00:05:07.786 "base_bdev_name": "Malloc0", 00:05:07.786 "name": "Passthru0" 00:05:07.786 } 00:05:07.786 }, 00:05:07.786 "memory_domains": [ 00:05:07.786 { 00:05:07.786 "dma_device_id": "system", 00:05:07.786 "dma_device_type": 1 00:05:07.786 }, 00:05:07.786 { 00:05:07.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:07.786 "dma_device_type": 2 00:05:07.786 } 00:05:07.786 ], 00:05:07.786 "name": "Passthru0", 00:05:07.786 "num_blocks": 16384, 00:05:07.786 "product_name": "passthru", 00:05:07.786 "supported_io_types": { 00:05:07.786 "abort": true, 00:05:07.786 "compare": false, 00:05:07.786 "compare_and_write": false, 00:05:07.786 "copy": true, 00:05:07.786 "flush": true, 00:05:07.787 "get_zone_info": false, 00:05:07.787 "nvme_admin": false, 00:05:07.787 "nvme_io": false, 00:05:07.787 "nvme_io_md": false, 00:05:07.787 "nvme_iov_md": false, 00:05:07.787 "read": true, 00:05:07.787 "reset": true, 00:05:07.787 "seek_data": false, 00:05:07.787 "seek_hole": false, 00:05:07.787 "unmap": true, 00:05:07.787 "write": true, 00:05:07.787 "write_zeroes": true, 00:05:07.787 "zcopy": true, 00:05:07.787 "zone_append": false, 00:05:07.787 "zone_management": false 00:05:07.787 }, 00:05:07.787 "uuid": "511e5e5a-8461-56f8-9de1-7634348117c4", 00:05:07.787 "zoned": false 00:05:07.787 } 00:05:07.787 ]' 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd 
bdev_malloc_delete Malloc0 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:07.787 ************************************ 00:05:07.787 END TEST rpc_integrity 00:05:07.787 ************************************ 00:05:07.787 17:04:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:07.787 00:05:07.787 real 0m0.357s 00:05:07.787 user 0m0.243s 00:05:07.787 sys 0m0.041s 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.787 17:04:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 17:04:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:07.787 17:04:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.787 17:04:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.787 17:04:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 ************************************ 00:05:07.787 START TEST rpc_plugins 00:05:07.787 ************************************ 00:05:07.787 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:07.787 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:07.787 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.787 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:07.787 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.787 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:07.787 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:07.787 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.787 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:08.046 { 00:05:08.046 "aliases": [ 00:05:08.046 "ea24c91c-b927-407e-a7de-7fed97cdb1fe" 00:05:08.046 ], 00:05:08.046 "assigned_rate_limits": { 00:05:08.046 "r_mbytes_per_sec": 0, 00:05:08.046 "rw_ios_per_sec": 0, 00:05:08.046 "rw_mbytes_per_sec": 0, 00:05:08.046 "w_mbytes_per_sec": 0 00:05:08.046 }, 00:05:08.046 "block_size": 4096, 00:05:08.046 "claimed": false, 00:05:08.046 "driver_specific": {}, 00:05:08.046 "memory_domains": [ 00:05:08.046 { 00:05:08.046 "dma_device_id": "system", 00:05:08.046 "dma_device_type": 1 00:05:08.046 }, 00:05:08.046 { 00:05:08.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.046 "dma_device_type": 2 00:05:08.046 } 00:05:08.046 ], 00:05:08.046 "name": "Malloc1", 00:05:08.046 "num_blocks": 256, 00:05:08.046 "product_name": "Malloc disk", 00:05:08.046 "supported_io_types": { 00:05:08.046 "abort": true, 00:05:08.046 "compare": false, 
00:05:08.046 "compare_and_write": false, 00:05:08.046 "copy": true, 00:05:08.046 "flush": true, 00:05:08.046 "get_zone_info": false, 00:05:08.046 "nvme_admin": false, 00:05:08.046 "nvme_io": false, 00:05:08.046 "nvme_io_md": false, 00:05:08.046 "nvme_iov_md": false, 00:05:08.046 "read": true, 00:05:08.046 "reset": true, 00:05:08.046 "seek_data": false, 00:05:08.046 "seek_hole": false, 00:05:08.046 "unmap": true, 00:05:08.046 "write": true, 00:05:08.046 "write_zeroes": true, 00:05:08.046 "zcopy": true, 00:05:08.046 "zone_append": false, 00:05:08.046 "zone_management": false 00:05:08.046 }, 00:05:08.046 "uuid": "ea24c91c-b927-407e-a7de-7fed97cdb1fe", 00:05:08.046 "zoned": false 00:05:08.046 } 00:05:08.046 ]' 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:08.046 17:04:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:08.046 00:05:08.046 real 0m0.154s 00:05:08.046 user 0m0.102s 00:05:08.046 sys 0m0.019s 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.046 ************************************ 00:05:08.046 END TEST rpc_plugins 00:05:08.046 17:04:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:08.046 ************************************ 00:05:08.046 17:04:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:08.046 17:04:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.046 17:04:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.046 17:04:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.046 ************************************ 00:05:08.046 START TEST rpc_trace_cmd_test 00:05:08.046 ************************************ 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.046 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:08.046 "bdev": { 00:05:08.046 "mask": "0x8", 00:05:08.046 "tpoint_mask": "0xffffffffffffffff" 00:05:08.046 }, 00:05:08.046 "bdev_nvme": { 00:05:08.046 "mask": "0x4000", 00:05:08.046 "tpoint_mask": "0x0" 00:05:08.046 }, 00:05:08.046 "bdev_raid": { 
00:05:08.046 "mask": "0x20000", 00:05:08.046 "tpoint_mask": "0x0" 00:05:08.046 }, 00:05:08.046 "blob": { 00:05:08.046 "mask": "0x10000", 00:05:08.046 "tpoint_mask": "0x0" 00:05:08.046 }, 00:05:08.046 "blobfs": { 00:05:08.046 "mask": "0x80", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "dsa": { 00:05:08.047 "mask": "0x200", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "ftl": { 00:05:08.047 "mask": "0x40", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "iaa": { 00:05:08.047 "mask": "0x1000", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "iscsi_conn": { 00:05:08.047 "mask": "0x2", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "nvme_pcie": { 00:05:08.047 "mask": "0x800", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "nvme_tcp": { 00:05:08.047 "mask": "0x2000", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "nvmf_rdma": { 00:05:08.047 "mask": "0x10", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "nvmf_tcp": { 00:05:08.047 "mask": "0x20", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "scheduler": { 00:05:08.047 "mask": "0x40000", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "scsi": { 00:05:08.047 "mask": "0x4", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "sock": { 00:05:08.047 "mask": "0x8000", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "thread": { 00:05:08.047 "mask": "0x400", 00:05:08.047 "tpoint_mask": "0x0" 00:05:08.047 }, 00:05:08.047 "tpoint_group_mask": "0x8", 00:05:08.047 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58612" 00:05:08.047 }' 00:05:08.047 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:08.047 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:08.047 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:08.306 00:05:08.306 real 0m0.261s 00:05:08.306 user 0m0.226s 00:05:08.306 sys 0m0.026s 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.306 17:04:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 ************************************ 00:05:08.306 END TEST rpc_trace_cmd_test 00:05:08.306 ************************************ 00:05:08.306 17:04:50 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:08.306 17:04:50 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:08.306 17:04:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.306 17:04:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.306 17:04:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.306 ************************************ 00:05:08.306 START TEST go_rpc 00:05:08.306 ************************************ 00:05:08.306 17:04:50 rpc.go_rpc -- common/autotest_common.sh@1129 
-- # go_rpc 00:05:08.306 17:04:50 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:08.306 17:04:50 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:08.306 17:04:50 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:05:08.564 17:04:50 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:08.564 17:04:50 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.564 17:04:50 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.564 17:04:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.564 17:04:50 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.564 17:04:50 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:08.564 17:04:50 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["f834add3-97be-42d6-87ae-fd50d190537f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"f834add3-97be-42d6-87ae-fd50d190537f","zoned":false}]' 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:08.565 17:04:50 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.565 17:04:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.565 17:04:50 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:05:08.565 17:04:50 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:08.565 00:05:08.565 real 0m0.224s 00:05:08.565 user 0m0.152s 00:05:08.565 sys 0m0.038s 00:05:08.565 17:04:50 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.565 17:04:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.565 ************************************ 00:05:08.565 END TEST go_rpc 00:05:08.565 ************************************ 00:05:08.565 17:04:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:08.565 17:04:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:08.565 17:04:50 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.565 17:04:50 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.565 17:04:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.565 ************************************ 00:05:08.565 START TEST rpc_daemon_integrity 00:05:08.565 ************************************ 00:05:08.565 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:08.565 
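[note] The rpc_integrity and go_rpc runs above, and the rpc_daemon_integrity run starting here, all drive the same create/inspect/delete cycle against spdk_tgt over /var/tmp/spdk.sock. A rough manual equivalent with scripts/rpc.py, assuming the target from this trace is still listening; the commands and bdev names mirror the log, the jq checks mirror its '[' 2 == 2 ']' assertions:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 8 512                      # 8 MiB malloc bdev, 512 B blocks; prints its name (Malloc0)
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0  # claim the base bdev behind a passthru vbdev
  $rpc bdev_get_bdevs | jq length                    # 2: base + passthru
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
  $rpc bdev_get_bdevs | jq length                    # back to 0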
17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.565 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.565 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.565 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.565 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.565 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.824 { 00:05:08.824 "aliases": [ 00:05:08.824 "9955dff5-60cc-438b-afa1-520b64a91406" 00:05:08.824 ], 00:05:08.824 "assigned_rate_limits": { 00:05:08.824 "r_mbytes_per_sec": 0, 00:05:08.824 "rw_ios_per_sec": 0, 00:05:08.824 "rw_mbytes_per_sec": 0, 00:05:08.824 "w_mbytes_per_sec": 0 00:05:08.824 }, 00:05:08.824 "block_size": 512, 00:05:08.824 "claimed": false, 00:05:08.824 "driver_specific": {}, 00:05:08.824 "memory_domains": [ 00:05:08.824 { 00:05:08.824 "dma_device_id": "system", 00:05:08.824 "dma_device_type": 1 00:05:08.824 }, 00:05:08.824 { 00:05:08.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.824 "dma_device_type": 2 00:05:08.824 } 00:05:08.824 ], 00:05:08.824 "name": "Malloc3", 00:05:08.824 "num_blocks": 16384, 00:05:08.824 "product_name": "Malloc disk", 00:05:08.824 "supported_io_types": { 00:05:08.824 "abort": true, 00:05:08.824 "compare": false, 00:05:08.824 "compare_and_write": false, 00:05:08.824 "copy": true, 00:05:08.824 "flush": true, 00:05:08.824 "get_zone_info": false, 00:05:08.824 "nvme_admin": false, 00:05:08.824 "nvme_io": false, 00:05:08.824 "nvme_io_md": false, 00:05:08.824 "nvme_iov_md": false, 00:05:08.824 "read": true, 00:05:08.824 "reset": true, 00:05:08.824 "seek_data": false, 00:05:08.824 "seek_hole": false, 00:05:08.824 "unmap": true, 00:05:08.824 "write": true, 00:05:08.824 "write_zeroes": true, 00:05:08.824 "zcopy": true, 00:05:08.824 "zone_append": false, 00:05:08.824 "zone_management": false 00:05:08.824 }, 00:05:08.824 "uuid": "9955dff5-60cc-438b-afa1-520b64a91406", 00:05:08.824 "zoned": false 00:05:08.824 } 00:05:08.824 ]' 00:05:08.824 17:04:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.824 17:04:51 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.824 [2024-12-06 17:04:51.007397] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:08.824 [2024-12-06 17:04:51.007450] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.824 [2024-12-06 17:04:51.007471] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1adabe0 00:05:08.824 [2024-12-06 17:04:51.007481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.824 [2024-12-06 17:04:51.009000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.824 [2024-12-06 17:04:51.009037] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.824 Passthru0 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.824 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.824 { 00:05:08.824 "aliases": [ 00:05:08.824 "9955dff5-60cc-438b-afa1-520b64a91406" 00:05:08.824 ], 00:05:08.824 "assigned_rate_limits": { 00:05:08.824 "r_mbytes_per_sec": 0, 00:05:08.824 "rw_ios_per_sec": 0, 00:05:08.824 "rw_mbytes_per_sec": 0, 00:05:08.824 "w_mbytes_per_sec": 0 00:05:08.824 }, 00:05:08.824 "block_size": 512, 00:05:08.824 "claim_type": "exclusive_write", 00:05:08.824 "claimed": true, 00:05:08.824 "driver_specific": {}, 00:05:08.824 "memory_domains": [ 00:05:08.824 { 00:05:08.824 "dma_device_id": "system", 00:05:08.824 "dma_device_type": 1 00:05:08.824 }, 00:05:08.824 { 00:05:08.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.824 "dma_device_type": 2 00:05:08.824 } 00:05:08.824 ], 00:05:08.824 "name": "Malloc3", 00:05:08.824 "num_blocks": 16384, 00:05:08.824 "product_name": "Malloc disk", 00:05:08.824 "supported_io_types": { 00:05:08.824 "abort": true, 00:05:08.824 "compare": false, 00:05:08.824 "compare_and_write": false, 00:05:08.824 "copy": true, 00:05:08.824 "flush": true, 00:05:08.824 "get_zone_info": false, 00:05:08.824 "nvme_admin": false, 00:05:08.824 "nvme_io": false, 00:05:08.824 "nvme_io_md": false, 00:05:08.824 "nvme_iov_md": false, 00:05:08.824 "read": true, 00:05:08.824 "reset": true, 00:05:08.824 "seek_data": false, 00:05:08.824 "seek_hole": false, 00:05:08.824 "unmap": true, 00:05:08.824 "write": true, 00:05:08.824 "write_zeroes": true, 00:05:08.824 "zcopy": true, 00:05:08.824 "zone_append": false, 00:05:08.824 "zone_management": false 00:05:08.824 }, 00:05:08.824 "uuid": "9955dff5-60cc-438b-afa1-520b64a91406", 00:05:08.824 "zoned": false 00:05:08.824 }, 00:05:08.824 { 00:05:08.824 "aliases": [ 00:05:08.824 "9ff3ca73-8b36-5699-9ce0-179f6ec1bd82" 00:05:08.825 ], 00:05:08.825 "assigned_rate_limits": { 00:05:08.825 "r_mbytes_per_sec": 0, 00:05:08.825 "rw_ios_per_sec": 0, 00:05:08.825 "rw_mbytes_per_sec": 0, 00:05:08.825 "w_mbytes_per_sec": 0 00:05:08.825 }, 00:05:08.825 "block_size": 512, 00:05:08.825 "claimed": false, 00:05:08.825 "driver_specific": { 00:05:08.825 "passthru": { 00:05:08.825 "base_bdev_name": "Malloc3", 00:05:08.825 "name": "Passthru0" 00:05:08.825 } 00:05:08.825 }, 00:05:08.825 
"memory_domains": [ 00:05:08.825 { 00:05:08.825 "dma_device_id": "system", 00:05:08.825 "dma_device_type": 1 00:05:08.825 }, 00:05:08.825 { 00:05:08.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.825 "dma_device_type": 2 00:05:08.825 } 00:05:08.825 ], 00:05:08.825 "name": "Passthru0", 00:05:08.825 "num_blocks": 16384, 00:05:08.825 "product_name": "passthru", 00:05:08.825 "supported_io_types": { 00:05:08.825 "abort": true, 00:05:08.825 "compare": false, 00:05:08.825 "compare_and_write": false, 00:05:08.825 "copy": true, 00:05:08.825 "flush": true, 00:05:08.825 "get_zone_info": false, 00:05:08.825 "nvme_admin": false, 00:05:08.825 "nvme_io": false, 00:05:08.825 "nvme_io_md": false, 00:05:08.825 "nvme_iov_md": false, 00:05:08.825 "read": true, 00:05:08.825 "reset": true, 00:05:08.825 "seek_data": false, 00:05:08.825 "seek_hole": false, 00:05:08.825 "unmap": true, 00:05:08.825 "write": true, 00:05:08.825 "write_zeroes": true, 00:05:08.825 "zcopy": true, 00:05:08.825 "zone_append": false, 00:05:08.825 "zone_management": false 00:05:08.825 }, 00:05:08.825 "uuid": "9ff3ca73-8b36-5699-9ce0-179f6ec1bd82", 00:05:08.825 "zoned": false 00:05:08.825 } 00:05:08.825 ]' 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.825 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.084 ************************************ 00:05:09.084 END TEST rpc_daemon_integrity 00:05:09.084 ************************************ 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.084 00:05:09.084 real 0m0.341s 00:05:09.084 user 0m0.235s 00:05:09.084 sys 0m0.039s 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.084 17:04:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.084 17:04:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:09.084 17:04:51 rpc -- rpc/rpc.sh@84 -- # killprocess 58612 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@954 -- # '[' -z 58612 ']' 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@958 -- # kill -0 58612 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@959 -- # 
uname 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58612 00:05:09.084 killing process with pid 58612 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58612' 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@973 -- # kill 58612 00:05:09.084 17:04:51 rpc -- common/autotest_common.sh@978 -- # wait 58612 00:05:09.343 00:05:09.343 real 0m2.529s 00:05:09.343 user 0m3.502s 00:05:09.343 sys 0m0.662s 00:05:09.343 17:04:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.343 ************************************ 00:05:09.343 END TEST rpc 00:05:09.343 17:04:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.343 ************************************ 00:05:09.343 17:04:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:09.343 17:04:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.343 17:04:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.343 17:04:51 -- common/autotest_common.sh@10 -- # set +x 00:05:09.343 ************************************ 00:05:09.343 START TEST skip_rpc 00:05:09.343 ************************************ 00:05:09.343 17:04:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:09.343 * Looking for test storage... 00:05:09.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:09.343 17:04:51 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.343 17:04:51 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.343 17:04:51 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.602 17:04:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.602 --rc genhtml_branch_coverage=1 00:05:09.602 --rc genhtml_function_coverage=1 00:05:09.602 --rc genhtml_legend=1 00:05:09.602 --rc geninfo_all_blocks=1 00:05:09.602 --rc geninfo_unexecuted_blocks=1 00:05:09.602 00:05:09.602 ' 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.602 --rc genhtml_branch_coverage=1 00:05:09.602 --rc genhtml_function_coverage=1 00:05:09.602 --rc genhtml_legend=1 00:05:09.602 --rc geninfo_all_blocks=1 00:05:09.602 --rc geninfo_unexecuted_blocks=1 00:05:09.602 00:05:09.602 ' 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.602 --rc genhtml_branch_coverage=1 00:05:09.602 --rc genhtml_function_coverage=1 00:05:09.602 --rc genhtml_legend=1 00:05:09.602 --rc geninfo_all_blocks=1 00:05:09.602 --rc geninfo_unexecuted_blocks=1 00:05:09.602 00:05:09.602 ' 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.602 --rc genhtml_branch_coverage=1 00:05:09.602 --rc genhtml_function_coverage=1 00:05:09.602 --rc genhtml_legend=1 00:05:09.602 --rc geninfo_all_blocks=1 00:05:09.602 --rc geninfo_unexecuted_blocks=1 00:05:09.602 00:05:09.602 ' 00:05:09.602 17:04:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.602 17:04:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:09.602 17:04:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.602 17:04:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.602 ************************************ 00:05:09.602 START TEST skip_rpc 00:05:09.602 ************************************ 00:05:09.602 17:04:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:09.602 17:04:51 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58862 00:05:09.602 17:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.602 17:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:09.602 17:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:09.602 [2024-12-06 17:04:51.814630] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:05:09.602 [2024-12-06 17:04:51.814723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58862 ] 00:05:09.862 [2024-12-06 17:04:51.967584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.862 [2024-12-06 17:04:52.008158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.160 2024/12/06 17:04:56 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58862 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58862 ']' 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58862 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58862 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.160 killing process with pid 58862 00:05:15.160 17:04:56 skip_rpc.skip_rpc 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58862' 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58862 00:05:15.160 17:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58862 00:05:15.160 ************************************ 00:05:15.160 END TEST skip_rpc 00:05:15.160 ************************************ 00:05:15.160 00:05:15.160 real 0m5.292s 00:05:15.160 user 0m5.002s 00:05:15.160 sys 0m0.203s 00:05:15.160 17:04:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.160 17:04:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.160 17:04:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:15.160 17:04:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.160 17:04:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.160 17:04:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.160 ************************************ 00:05:15.160 START TEST skip_rpc_with_json 00:05:15.160 ************************************ 00:05:15.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58955 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58955 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58955 ']' 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.160 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.160 [2024-12-06 17:04:57.162051] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
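[note] The skip_rpc case that just finished treats a failed client connect as its pass condition: spdk_tgt was launched with --no-rpc-server, so the rpc_cmd dial of /var/tmp/spdk.sock had to fail with "no such file or directory". Reduced to a standalone sketch (fixed 5 s delay as in the test; cleanup simplified):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                            # the test's fixed startup wait
  if ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; then
    echo 'RPC server correctly absent'               # expected: the socket was never created
  fi
  kill "$pid"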
00:05:15.160 [2024-12-06 17:04:57.162424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58955 ] 00:05:15.160 [2024-12-06 17:04:57.317800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.160 [2024-12-06 17:04:57.356207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.420 [2024-12-06 17:04:57.543962] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:15.420 2024/12/06 17:04:57 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:05:15.420 request: 00:05:15.420 { 00:05:15.420 "method": "nvmf_get_transports", 00:05:15.420 "params": { 00:05:15.420 "trtype": "tcp" 00:05:15.420 } 00:05:15.420 } 00:05:15.420 Got JSON-RPC error response 00:05:15.420 GoRPCClient: error on JSON-RPC call 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.420 [2024-12-06 17:04:57.556119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.420 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.679 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.679 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.679 { 00:05:15.679 "subsystems": [ 00:05:15.679 { 00:05:15.679 "subsystem": "fsdev", 00:05:15.679 "config": [ 00:05:15.679 { 00:05:15.679 "method": "fsdev_set_opts", 00:05:15.679 "params": { 00:05:15.679 "fsdev_io_cache_size": 256, 00:05:15.679 "fsdev_io_pool_size": 65535 00:05:15.679 } 00:05:15.679 } 00:05:15.679 ] 00:05:15.679 }, 00:05:15.679 { 00:05:15.680 "subsystem": "keyring", 00:05:15.680 "config": [] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "iobuf", 00:05:15.680 "config": [ 00:05:15.680 { 00:05:15.680 "method": "iobuf_set_options", 00:05:15.680 "params": { 00:05:15.680 "enable_numa": false, 00:05:15.680 "large_bufsize": 135168, 00:05:15.680 "large_pool_count": 1024, 00:05:15.680 "small_bufsize": 8192, 00:05:15.680 "small_pool_count": 8192 00:05:15.680 } 
00:05:15.680 } 00:05:15.680 ] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "sock", 00:05:15.680 "config": [ 00:05:15.680 { 00:05:15.680 "method": "sock_set_default_impl", 00:05:15.680 "params": { 00:05:15.680 "impl_name": "posix" 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "sock_impl_set_options", 00:05:15.680 "params": { 00:05:15.680 "enable_ktls": false, 00:05:15.680 "enable_placement_id": 0, 00:05:15.680 "enable_quickack": false, 00:05:15.680 "enable_recv_pipe": true, 00:05:15.680 "enable_zerocopy_send_client": false, 00:05:15.680 "enable_zerocopy_send_server": true, 00:05:15.680 "impl_name": "ssl", 00:05:15.680 "recv_buf_size": 4096, 00:05:15.680 "send_buf_size": 4096, 00:05:15.680 "tls_version": 0, 00:05:15.680 "zerocopy_threshold": 0 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "sock_impl_set_options", 00:05:15.680 "params": { 00:05:15.680 "enable_ktls": false, 00:05:15.680 "enable_placement_id": 0, 00:05:15.680 "enable_quickack": false, 00:05:15.680 "enable_recv_pipe": true, 00:05:15.680 "enable_zerocopy_send_client": false, 00:05:15.680 "enable_zerocopy_send_server": true, 00:05:15.680 "impl_name": "posix", 00:05:15.680 "recv_buf_size": 2097152, 00:05:15.680 "send_buf_size": 2097152, 00:05:15.680 "tls_version": 0, 00:05:15.680 "zerocopy_threshold": 0 00:05:15.680 } 00:05:15.680 } 00:05:15.680 ] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "vmd", 00:05:15.680 "config": [] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "accel", 00:05:15.680 "config": [ 00:05:15.680 { 00:05:15.680 "method": "accel_set_options", 00:05:15.680 "params": { 00:05:15.680 "buf_count": 2048, 00:05:15.680 "large_cache_size": 16, 00:05:15.680 "sequence_count": 2048, 00:05:15.680 "small_cache_size": 128, 00:05:15.680 "task_count": 2048 00:05:15.680 } 00:05:15.680 } 00:05:15.680 ] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "bdev", 00:05:15.680 "config": [ 00:05:15.680 { 00:05:15.680 "method": "bdev_set_options", 00:05:15.680 "params": { 00:05:15.680 "bdev_auto_examine": true, 00:05:15.680 "bdev_io_cache_size": 256, 00:05:15.680 "bdev_io_pool_size": 65535, 00:05:15.680 "iobuf_large_cache_size": 16, 00:05:15.680 "iobuf_small_cache_size": 128 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "bdev_raid_set_options", 00:05:15.680 "params": { 00:05:15.680 "process_max_bandwidth_mb_sec": 0, 00:05:15.680 "process_window_size_kb": 1024 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "bdev_iscsi_set_options", 00:05:15.680 "params": { 00:05:15.680 "timeout_sec": 30 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "bdev_nvme_set_options", 00:05:15.680 "params": { 00:05:15.680 "action_on_timeout": "none", 00:05:15.680 "allow_accel_sequence": false, 00:05:15.680 "arbitration_burst": 0, 00:05:15.680 "bdev_retry_count": 3, 00:05:15.680 "ctrlr_loss_timeout_sec": 0, 00:05:15.680 "delay_cmd_submit": true, 00:05:15.680 "dhchap_dhgroups": [ 00:05:15.680 "null", 00:05:15.680 "ffdhe2048", 00:05:15.680 "ffdhe3072", 00:05:15.680 "ffdhe4096", 00:05:15.680 "ffdhe6144", 00:05:15.680 "ffdhe8192" 00:05:15.680 ], 00:05:15.680 "dhchap_digests": [ 00:05:15.680 "sha256", 00:05:15.680 "sha384", 00:05:15.680 "sha512" 00:05:15.680 ], 00:05:15.680 "disable_auto_failback": false, 00:05:15.680 "fast_io_fail_timeout_sec": 0, 00:05:15.680 "generate_uuids": false, 00:05:15.680 "high_priority_weight": 0, 00:05:15.680 "io_path_stat": false, 00:05:15.680 "io_queue_requests": 0, 00:05:15.680 
"keep_alive_timeout_ms": 10000, 00:05:15.680 "low_priority_weight": 0, 00:05:15.680 "medium_priority_weight": 0, 00:05:15.680 "nvme_adminq_poll_period_us": 10000, 00:05:15.680 "nvme_error_stat": false, 00:05:15.680 "nvme_ioq_poll_period_us": 0, 00:05:15.680 "rdma_cm_event_timeout_ms": 0, 00:05:15.680 "rdma_max_cq_size": 0, 00:05:15.680 "rdma_srq_size": 0, 00:05:15.680 "reconnect_delay_sec": 0, 00:05:15.680 "timeout_admin_us": 0, 00:05:15.680 "timeout_us": 0, 00:05:15.680 "transport_ack_timeout": 0, 00:05:15.680 "transport_retry_count": 4, 00:05:15.680 "transport_tos": 0 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "bdev_nvme_set_hotplug", 00:05:15.680 "params": { 00:05:15.680 "enable": false, 00:05:15.680 "period_us": 100000 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "bdev_wait_for_examine" 00:05:15.680 } 00:05:15.680 ] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "scsi", 00:05:15.680 "config": null 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "scheduler", 00:05:15.680 "config": [ 00:05:15.680 { 00:05:15.680 "method": "framework_set_scheduler", 00:05:15.680 "params": { 00:05:15.680 "name": "static" 00:05:15.680 } 00:05:15.680 } 00:05:15.680 ] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "vhost_scsi", 00:05:15.680 "config": [] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "vhost_blk", 00:05:15.680 "config": [] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "ublk", 00:05:15.680 "config": [] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "nbd", 00:05:15.680 "config": [] 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "subsystem": "nvmf", 00:05:15.680 "config": [ 00:05:15.680 { 00:05:15.680 "method": "nvmf_set_config", 00:05:15.680 "params": { 00:05:15.680 "admin_cmd_passthru": { 00:05:15.680 "identify_ctrlr": false 00:05:15.680 }, 00:05:15.680 "dhchap_dhgroups": [ 00:05:15.680 "null", 00:05:15.680 "ffdhe2048", 00:05:15.680 "ffdhe3072", 00:05:15.680 "ffdhe4096", 00:05:15.680 "ffdhe6144", 00:05:15.680 "ffdhe8192" 00:05:15.680 ], 00:05:15.680 "dhchap_digests": [ 00:05:15.680 "sha256", 00:05:15.680 "sha384", 00:05:15.680 "sha512" 00:05:15.680 ], 00:05:15.680 "discovery_filter": "match_any" 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "nvmf_set_max_subsystems", 00:05:15.680 "params": { 00:05:15.680 "max_subsystems": 1024 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "nvmf_set_crdt", 00:05:15.680 "params": { 00:05:15.680 "crdt1": 0, 00:05:15.680 "crdt2": 0, 00:05:15.680 "crdt3": 0 00:05:15.680 } 00:05:15.680 }, 00:05:15.680 { 00:05:15.680 "method": "nvmf_create_transport", 00:05:15.680 "params": { 00:05:15.680 "abort_timeout_sec": 1, 00:05:15.680 "ack_timeout": 0, 00:05:15.680 "buf_cache_size": 4294967295, 00:05:15.680 "c2h_success": true, 00:05:15.680 "data_wr_pool_size": 0, 00:05:15.680 "dif_insert_or_strip": false, 00:05:15.680 "in_capsule_data_size": 4096, 00:05:15.680 "io_unit_size": 131072, 00:05:15.680 "max_aq_depth": 128, 00:05:15.680 "max_io_qpairs_per_ctrlr": 127, 00:05:15.680 "max_io_size": 131072, 00:05:15.680 "max_queue_depth": 128, 00:05:15.680 "num_shared_buffers": 511, 00:05:15.680 "sock_priority": 0, 00:05:15.680 "trtype": "TCP", 00:05:15.680 "zcopy": false 00:05:15.680 } 00:05:15.680 } 00:05:15.680 ] 00:05:15.680 }, 00:05:15.680 { 00:05:15.681 "subsystem": "iscsi", 00:05:15.681 "config": [ 00:05:15.681 { 00:05:15.681 "method": "iscsi_set_options", 00:05:15.681 "params": { 00:05:15.681 "allow_duplicated_isid": false, 
00:05:15.681 "chap_group": 0, 00:05:15.681 "data_out_pool_size": 2048, 00:05:15.681 "default_time2retain": 20, 00:05:15.681 "default_time2wait": 2, 00:05:15.681 "disable_chap": false, 00:05:15.681 "error_recovery_level": 0, 00:05:15.681 "first_burst_length": 8192, 00:05:15.681 "immediate_data": true, 00:05:15.681 "immediate_data_pool_size": 16384, 00:05:15.681 "max_connections_per_session": 2, 00:05:15.681 "max_large_datain_per_connection": 64, 00:05:15.681 "max_queue_depth": 64, 00:05:15.681 "max_r2t_per_connection": 4, 00:05:15.681 "max_sessions": 128, 00:05:15.681 "mutual_chap": false, 00:05:15.681 "node_base": "iqn.2016-06.io.spdk", 00:05:15.681 "nop_in_interval": 30, 00:05:15.681 "nop_timeout": 60, 00:05:15.681 "pdu_pool_size": 36864, 00:05:15.681 "require_chap": false 00:05:15.681 } 00:05:15.681 } 00:05:15.681 ] 00:05:15.681 } 00:05:15.681 ] 00:05:15.681 } 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58955 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58955 ']' 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58955 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58955 00:05:15.681 killing process with pid 58955 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58955' 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58955 00:05:15.681 17:04:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58955 00:05:15.939 17:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:15.939 17:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58981 00:05:15.939 17:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58981 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58981 ']' 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58981 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58981 00:05:21.207 killing process with pid 58981 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58981' 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58981 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58981 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:21.207 ************************************ 00:05:21.207 END TEST skip_rpc_with_json 00:05:21.207 ************************************ 00:05:21.207 00:05:21.207 real 0m6.207s 00:05:21.207 user 0m5.947s 00:05:21.207 sys 0m0.454s 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.207 17:05:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:21.207 17:05:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.207 17:05:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.207 17:05:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.207 ************************************ 00:05:21.207 START TEST skip_rpc_with_delay 00:05:21.207 ************************************ 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:21.207 [2024-12-06 17:05:03.412148] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
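The spdk_app_start error above is the expected outcome: skip_rpc_with_delay asserts that spdk_tgt refuses '--wait-for-rpc' when no RPC server will be started. In outline, the NOT wrapper traced above reduces to the check below; this is a sketch condensed from the xtrace lines, and the real helper also classifies the exit status before deciding pass/fail, which is omitted here.

    # The assertion behind the trace above: this invocation must fail
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server' >&2
        exit 1
    fi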
00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.207 00:05:21.207 real 0m0.096s 00:05:21.207 user 0m0.062s 00:05:21.207 sys 0m0.033s 00:05:21.207 ************************************ 00:05:21.207 END TEST skip_rpc_with_delay 00:05:21.207 ************************************ 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.207 17:05:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:21.207 17:05:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:21.207 17:05:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:21.207 17:05:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:21.207 17:05:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.207 17:05:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.207 17:05:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.207 ************************************ 00:05:21.207 START TEST exit_on_failed_rpc_init 00:05:21.207 ************************************ 00:05:21.207 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:21.207 17:05:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59085 00:05:21.207 17:05:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59085 00:05:21.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.207 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59085 ']' 00:05:21.207 17:05:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.208 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.208 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.208 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.208 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.208 17:05:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:21.467 [2024-12-06 17:05:03.560059] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:05:21.467 [2024-12-06 17:05:03.560163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59085 ] 00:05:21.467 [2024-12-06 17:05:03.718193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.467 [2024-12-06 17:05:03.757334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.404 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:22.404 [2024-12-06 17:05:04.651701] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:05:22.404 [2024-12-06 17:05:04.651794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59115 ] 00:05:22.664 [2024-12-06 17:05:04.801353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.664 [2024-12-06 17:05:04.843995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.664 [2024-12-06 17:05:04.844119] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:22.664 [2024-12-06 17:05:04.844150] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:22.664 [2024-12-06 17:05:04.844166] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59085 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59085 ']' 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59085 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59085 00:05:22.664 killing process with pid 59085 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59085' 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59085 00:05:22.664 17:05:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59085 00:05:22.923 00:05:22.923 real 0m1.694s 00:05:22.923 user 0m2.088s 00:05:22.923 sys 0m0.327s 00:05:22.923 17:05:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.923 17:05:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.923 ************************************ 00:05:22.923 END TEST exit_on_failed_rpc_init 00:05:22.923 ************************************ 00:05:23.181 17:05:05 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.181 ************************************ 00:05:23.181 END TEST skip_rpc 00:05:23.181 ************************************ 00:05:23.181 00:05:23.181 real 0m13.684s 00:05:23.181 user 0m13.271s 00:05:23.181 sys 0m1.240s 00:05:23.181 17:05:05 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.181 17:05:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.181 17:05:05 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:23.181 17:05:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.181 17:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.181 17:05:05 -- common/autotest_common.sh@10 -- # set +x 00:05:23.181 
************************************ 00:05:23.181 START TEST rpc_client 00:05:23.181 ************************************ 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:23.181 * Looking for test storage... 00:05:23.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.181 17:05:05 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.181 --rc genhtml_branch_coverage=1 00:05:23.181 --rc genhtml_function_coverage=1 00:05:23.181 --rc genhtml_legend=1 00:05:23.181 --rc geninfo_all_blocks=1 00:05:23.181 --rc geninfo_unexecuted_blocks=1 00:05:23.181 00:05:23.181 ' 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.181 --rc genhtml_branch_coverage=1 00:05:23.181 --rc genhtml_function_coverage=1 00:05:23.181 --rc genhtml_legend=1 00:05:23.181 --rc geninfo_all_blocks=1 00:05:23.181 --rc geninfo_unexecuted_blocks=1 00:05:23.181 00:05:23.181 ' 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.181 --rc genhtml_branch_coverage=1 00:05:23.181 --rc genhtml_function_coverage=1 00:05:23.181 --rc genhtml_legend=1 00:05:23.181 --rc geninfo_all_blocks=1 00:05:23.181 --rc geninfo_unexecuted_blocks=1 00:05:23.181 00:05:23.181 ' 00:05:23.181 17:05:05 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.181 --rc genhtml_branch_coverage=1 00:05:23.181 --rc genhtml_function_coverage=1 00:05:23.181 --rc genhtml_legend=1 00:05:23.181 --rc geninfo_all_blocks=1 00:05:23.181 --rc geninfo_unexecuted_blocks=1 00:05:23.181 00:05:23.181 ' 00:05:23.181 17:05:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:23.439 OK 00:05:23.439 17:05:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.439 00:05:23.439 real 0m0.213s 00:05:23.439 user 0m0.134s 00:05:23.439 sys 0m0.086s 00:05:23.439 ************************************ 00:05:23.439 END TEST rpc_client 00:05:23.439 ************************************ 00:05:23.439 17:05:05 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.439 17:05:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 17:05:05 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:23.439 17:05:05 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.439 17:05:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.439 17:05:05 -- common/autotest_common.sh@10 -- # set +x 00:05:23.439 ************************************ 00:05:23.439 START TEST json_config 00:05:23.439 ************************************ 00:05:23.439 17:05:05 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:23.439 17:05:05 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.439 17:05:05 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.439 17:05:05 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.439 17:05:05 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.439 17:05:05 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.439 17:05:05 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.439 17:05:05 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.440 17:05:05 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.440 17:05:05 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.440 17:05:05 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.440 17:05:05 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.440 17:05:05 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.440 17:05:05 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.440 17:05:05 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:23.440 17:05:05 json_config -- scripts/common.sh@345 -- # : 1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.440 17:05:05 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.440 17:05:05 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@353 -- # local d=1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.440 17:05:05 json_config -- scripts/common.sh@355 -- # echo 1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.440 17:05:05 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:23.440 17:05:05 json_config -- scripts/common.sh@353 -- # local d=2 00:05:23.440 17:05:05 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.440 17:05:05 json_config -- scripts/common.sh@355 -- # echo 2 00:05:23.440 17:05:05 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.440 17:05:05 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.440 17:05:05 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.440 17:05:05 json_config -- scripts/common.sh@368 -- # return 0 00:05:23.440 17:05:05 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.440 17:05:05 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.440 --rc genhtml_branch_coverage=1 00:05:23.440 --rc genhtml_function_coverage=1 00:05:23.440 --rc genhtml_legend=1 00:05:23.440 --rc geninfo_all_blocks=1 00:05:23.440 --rc geninfo_unexecuted_blocks=1 00:05:23.440 00:05:23.440 ' 00:05:23.440 17:05:05 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.440 --rc genhtml_branch_coverage=1 00:05:23.440 --rc genhtml_function_coverage=1 00:05:23.440 --rc genhtml_legend=1 00:05:23.440 --rc geninfo_all_blocks=1 00:05:23.440 --rc geninfo_unexecuted_blocks=1 00:05:23.440 00:05:23.440 ' 00:05:23.440 17:05:05 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.440 --rc genhtml_branch_coverage=1 00:05:23.440 --rc genhtml_function_coverage=1 00:05:23.440 --rc genhtml_legend=1 00:05:23.440 --rc geninfo_all_blocks=1 00:05:23.440 --rc geninfo_unexecuted_blocks=1 00:05:23.440 00:05:23.440 ' 00:05:23.440 17:05:05 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.440 --rc genhtml_branch_coverage=1 00:05:23.440 --rc genhtml_function_coverage=1 00:05:23.440 --rc genhtml_legend=1 00:05:23.440 --rc geninfo_all_blocks=1 00:05:23.440 --rc geninfo_unexecuted_blocks=1 00:05:23.440 00:05:23.440 ' 00:05:23.440 17:05:05 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.440 17:05:05 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:23.440 17:05:05 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.440 17:05:05 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.440 17:05:05 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.440 17:05:05 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.440 17:05:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.440 17:05:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.440 17:05:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.440 17:05:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:23.440 17:05:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@51 -- # : 0 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.440 17:05:05 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.441 17:05:05 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.441 17:05:05 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.441 17:05:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.441 17:05:05 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.441 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.441 17:05:05 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:23.441 17:05:05 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.441 17:05:05 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:23.441 INFO: JSON configuration test init 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:23.441 17:05:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.441 17:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.441 17:05:05 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:23.441 17:05:05 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.441 17:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.699 Waiting for target to run... 00:05:23.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
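The associative arrays declared above (app_pid, app_socket, app_params, configs_path) are the harness bookkeeping for each app instance. Condensed from the trace that follows, the target launch amounts to the sketch below; the function body is a reconstruction from the xtrace lines, not the actual common.sh source, and the parameter values are the ones visible in this run.

    # Launch sketch, values from the trace:
    #   app_params[target]='-m 0x1 -s 1024', app_socket[target]='/var/tmp/spdk_tgt.sock'
    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --wait-for-rpc &
    app_pid[$app]=$!
    waitforlisten "${app_pid[$app]}" "${app_socket[$app]}"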
00:05:23.699 17:05:05 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:23.699 17:05:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:23.699 17:05:05 json_config -- json_config/common.sh@10 -- # shift 00:05:23.699 17:05:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.699 17:05:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.699 17:05:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.699 17:05:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.699 17:05:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.699 17:05:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59254 00:05:23.699 17:05:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.699 17:05:05 json_config -- json_config/common.sh@25 -- # waitforlisten 59254 /var/tmp/spdk_tgt.sock 00:05:23.699 17:05:05 json_config -- common/autotest_common.sh@835 -- # '[' -z 59254 ']' 00:05:23.699 17:05:05 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.699 17:05:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:23.699 17:05:05 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.699 17:05:05 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.699 17:05:05 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.699 17:05:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.699 [2024-12-06 17:05:05.813990] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:05:23.699 [2024-12-06 17:05:05.814305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59254 ] 00:05:23.958 [2024-12-06 17:05:06.130453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.958 [2024-12-06 17:05:06.162872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.893 17:05:06 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.893 17:05:06 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:24.893 17:05:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:24.893 00:05:24.893 17:05:06 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:24.893 17:05:06 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:24.893 17:05:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.893 17:05:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.893 17:05:06 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:24.893 17:05:06 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:24.893 17:05:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.893 17:05:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.893 17:05:06 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:24.893 17:05:06 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:24.893 17:05:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:25.152 17:05:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.152 17:05:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:25.152 17:05:07 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:25.152 17:05:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:25.719 17:05:07 
json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@54 -- # sort 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:25.719 17:05:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.719 17:05:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:25.719 17:05:07 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.719 17:05:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:25.719 17:05:07 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:25.719 17:05:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:26.000 MallocForNvmf0 00:05:26.000 17:05:08 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.000 17:05:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:26.264 MallocForNvmf1 00:05:26.264 17:05:08 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:26.264 17:05:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:26.523 [2024-12-06 17:05:08.656938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.523 17:05:08 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:26.523 17:05:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:26.781 17:05:08 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:26.781 17:05:08 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:27.038 17:05:09 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:27.038 17:05:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:27.604 17:05:09 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:27.604 17:05:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:27.604 [2024-12-06 17:05:09.865596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:27.604 17:05:09 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:27.604 17:05:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.604 17:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.862 17:05:09 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:27.862 17:05:09 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:27.862 17:05:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.862 17:05:09 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:27.862 17:05:09 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:27.862 17:05:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:28.121 MallocBdevForConfigChangeCheck 00:05:28.121 17:05:10 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:28.121 17:05:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.121 17:05:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.121 17:05:10 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:28.121 17:05:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.688 INFO: shutting down applications... 00:05:28.688 17:05:10 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
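The configuration about to be cleared was built entirely over the RPC socket; the setup traced above condenses to the rpc.py calls below, all taken verbatim from the trace (socket path from this run). It is this state that save_config just serialized and that the later json_diff checks compare against.

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck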
00:05:28.688 17:05:10 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:28.688 17:05:10 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:28.688 17:05:10 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:28.688 17:05:10 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:28.947 Calling clear_iscsi_subsystem 00:05:28.947 Calling clear_nvmf_subsystem 00:05:28.947 Calling clear_nbd_subsystem 00:05:28.947 Calling clear_ublk_subsystem 00:05:28.947 Calling clear_vhost_blk_subsystem 00:05:28.947 Calling clear_vhost_scsi_subsystem 00:05:28.947 Calling clear_bdev_subsystem 00:05:28.947 17:05:11 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:28.947 17:05:11 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:28.947 17:05:11 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:28.947 17:05:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.947 17:05:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:28.947 17:05:11 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:29.513 17:05:11 json_config -- json_config/json_config.sh@352 -- # break 00:05:29.513 17:05:11 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:29.513 17:05:11 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:29.513 17:05:11 json_config -- json_config/common.sh@31 -- # local app=target 00:05:29.513 17:05:11 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.513 17:05:11 json_config -- json_config/common.sh@35 -- # [[ -n 59254 ]] 00:05:29.513 17:05:11 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59254 00:05:29.513 17:05:11 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.513 17:05:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.513 17:05:11 json_config -- json_config/common.sh@41 -- # kill -0 59254 00:05:29.513 17:05:11 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.772 17:05:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.772 17:05:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.772 17:05:12 json_config -- json_config/common.sh@41 -- # kill -0 59254 00:05:29.772 17:05:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:29.772 17:05:12 json_config -- json_config/common.sh@43 -- # break 00:05:29.772 17:05:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:29.772 SPDK target shutdown done 00:05:29.772 17:05:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:29.772 INFO: relaunching applications... 00:05:29.772 17:05:12 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
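The shutdown just completed is a bounded poll rather than a blocking wait; from the json_config/common.sh line numbers in the trace (kill at @38, the i < 30 loop at @40-41, sleep at @45), it reduces to the sketch below. The stderr redirect on kill -0 is an assumption; the trace shows only the bare call.

    # SIGINT, then poll liveness for up to 30 * 0.5s (pid 59254 in this run)
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then
            break    # process exited; shutdown done
        fi
        sleep 0.5
    done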
00:05:29.772 17:05:12 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.772 17:05:12 json_config -- json_config/common.sh@9 -- # local app=target 00:05:29.772 17:05:12 json_config -- json_config/common.sh@10 -- # shift 00:05:29.772 17:05:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:29.772 17:05:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:29.772 17:05:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:29.772 17:05:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.772 17:05:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:29.772 17:05:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59540 00:05:29.772 17:05:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:29.772 Waiting for target to run... 00:05:29.772 17:05:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:29.772 17:05:12 json_config -- json_config/common.sh@25 -- # waitforlisten 59540 /var/tmp/spdk_tgt.sock 00:05:29.772 17:05:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 59540 ']' 00:05:29.772 17:05:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.772 17:05:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.772 17:05:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.772 17:05:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.772 17:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.030 [2024-12-06 17:05:12.107802] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:05:30.030 [2024-12-06 17:05:12.107903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59540 ] 00:05:30.286 [2024-12-06 17:05:12.409456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.287 [2024-12-06 17:05:12.450387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.543 [2024-12-06 17:05:12.776160] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.543 [2024-12-06 17:05:12.808235] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.799 17:05:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.799 17:05:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:30.799 00:05:30.799 17:05:13 json_config -- json_config/common.sh@26 -- # echo '' 00:05:30.799 17:05:13 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:30.799 INFO: Checking if target configuration is the same... 00:05:30.799 17:05:13 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 
00:05:30.799 17:05:13 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:30.799 17:05:13 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:30.799 17:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.799 + '[' 2 -ne 2 ']' 00:05:30.799 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:30.799 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:30.799 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:30.799 +++ basename /dev/fd/62 00:05:31.055 ++ mktemp /tmp/62.XXX 00:05:31.055 + tmp_file_1=/tmp/62.YsA 00:05:31.055 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.055 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:31.055 + tmp_file_2=/tmp/spdk_tgt_config.json.rsg 00:05:31.055 + ret=0 00:05:31.055 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:31.311 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:31.311 + diff -u /tmp/62.YsA /tmp/spdk_tgt_config.json.rsg 00:05:31.311 INFO: JSON config files are the same 00:05:31.311 + echo 'INFO: JSON config files are the same' 00:05:31.311 + rm /tmp/62.YsA /tmp/spdk_tgt_config.json.rsg 00:05:31.311 + exit 0 00:05:31.311 17:05:13 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:31.311 INFO: changing configuration and checking if this can be detected... 00:05:31.311 17:05:13 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:31.311 17:05:13 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:31.311 17:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:31.618 17:05:13 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.618 17:05:13 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:31.618 17:05:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:31.618 + '[' 2 -ne 2 ']' 00:05:31.618 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:31.618 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:31.618 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:31.618 +++ basename /dev/fd/62 00:05:31.618 ++ mktemp /tmp/62.XXX 00:05:31.618 + tmp_file_1=/tmp/62.02P 00:05:31.618 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:31.618 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:31.618 + tmp_file_2=/tmp/spdk_tgt_config.json.ODH 00:05:31.618 + ret=0 00:05:31.618 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.182 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:32.182 + diff -u /tmp/62.02P /tmp/spdk_tgt_config.json.ODH 00:05:32.182 + ret=1 00:05:32.182 + echo '=== Start of file: /tmp/62.02P ===' 00:05:32.182 + cat /tmp/62.02P 00:05:32.182 + echo '=== End of file: /tmp/62.02P ===' 00:05:32.182 + echo '' 00:05:32.182 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ODH ===' 00:05:32.182 + cat /tmp/spdk_tgt_config.json.ODH 00:05:32.182 + echo '=== End of file: /tmp/spdk_tgt_config.json.ODH ===' 00:05:32.182 + echo '' 00:05:32.182 + rm /tmp/62.02P /tmp/spdk_tgt_config.json.ODH 00:05:32.182 + exit 1 00:05:32.182 INFO: configuration change detected. 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@324 -- # [[ -n 59540 ]] 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.182 17:05:14 json_config -- json_config/json_config.sh@330 -- # killprocess 59540 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@954 -- # '[' -z 59540 ']' 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@958 -- # kill -0 59540 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@959 -- # uname 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59540 00:05:32.182 
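Both comparison passes above reduce to one normalize-then-diff idiom: pull the live configuration over RPC with save_config, run both documents through config_filter.py -method sort so key ordering cannot cause false positives, and let diff's exit status be the verdict. A condensed sketch with paths taken from the log; the exact pipeline wiring inside json_diff.sh is assumed:

tmp_live=$(mktemp /tmp/62.XXX)
tmp_disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
  | test/json_config/config_filter.py -method sort > "$tmp_live"
test/json_config/config_filter.py -method sort \
  < spdk_tgt_config.json > "$tmp_disk"
if diff -u "$tmp_live" "$tmp_disk"; then
  echo 'INFO: JSON config files are the same'      # first pass above: exit 0
else
  echo 'INFO: configuration change detected.'      # after bdev_malloc_delete: exit 1
fi
rm -f "$tmp_live" "$tmp_disk"

Deleting MallocBdevForConfigChangeCheck between the two passes is exactly the kind of change the sorted diff is meant to catch.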
17:05:14 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.182 killing process with pid 59540 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59540' 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@973 -- # kill 59540 00:05:32.182 17:05:14 json_config -- common/autotest_common.sh@978 -- # wait 59540 00:05:32.439 17:05:14 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:32.439 17:05:14 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:32.439 17:05:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.439 17:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 17:05:14 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:32.439 INFO: Success 00:05:32.439 17:05:14 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:32.439 ************************************ 00:05:32.439 END TEST json_config 00:05:32.439 ************************************ 00:05:32.439 00:05:32.439 real 0m9.120s 00:05:32.439 user 0m13.561s 00:05:32.439 sys 0m1.592s 00:05:32.439 17:05:14 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.439 17:05:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 17:05:14 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:32.439 17:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.439 17:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.439 17:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:32.439 ************************************ 00:05:32.439 START TEST json_config_extra_key 00:05:32.439 ************************************ 00:05:32.439 17:05:14 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.698 17:05:14 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.698 --rc genhtml_branch_coverage=1 00:05:32.698 --rc genhtml_function_coverage=1 00:05:32.698 --rc genhtml_legend=1 00:05:32.698 --rc geninfo_all_blocks=1 00:05:32.698 --rc geninfo_unexecuted_blocks=1 00:05:32.698 00:05:32.698 ' 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.698 --rc genhtml_branch_coverage=1 00:05:32.698 --rc genhtml_function_coverage=1 00:05:32.698 --rc genhtml_legend=1 00:05:32.698 --rc geninfo_all_blocks=1 00:05:32.698 --rc geninfo_unexecuted_blocks=1 00:05:32.698 00:05:32.698 ' 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.698 --rc genhtml_branch_coverage=1 00:05:32.698 --rc genhtml_function_coverage=1 00:05:32.698 --rc genhtml_legend=1 00:05:32.698 --rc geninfo_all_blocks=1 00:05:32.698 --rc geninfo_unexecuted_blocks=1 00:05:32.698 00:05:32.698 ' 00:05:32.698 17:05:14 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:32.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.698 --rc genhtml_branch_coverage=1 00:05:32.698 --rc genhtml_function_coverage=1 00:05:32.698 --rc genhtml_legend=1 00:05:32.698 --rc geninfo_all_blocks=1 00:05:32.698 --rc geninfo_unexecuted_blocks=1 00:05:32.698 00:05:32.698 ' 00:05:32.698 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.698 17:05:14 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.698 17:05:14 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.698 17:05:14 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.698 17:05:14 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.699 17:05:14 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.699 17:05:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:32.699 17:05:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:32.699 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:32.699 17:05:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:32.699 INFO: launching applications... 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
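One genuine wart in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) refuses to compare an empty string as an integer, hence the logged "integer expression expected" complaint. It is harmless in this run, but the usual defensive form defaults the operand before the integer test; the flag name below is hypothetical:

# ${var:-0} substitutes 0 when the variable is unset or empty,
# so [ ... -eq 1 ] always sees a well-formed integer.
if [ "${SOME_NVMF_FLAG:-0}" -eq 1 ]; then   # hypothetical flag name
  echo 'flag enabled'
fi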
00:05:32.699 17:05:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59719 00:05:32.699 Waiting for target to run... 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59719 /var/tmp/spdk_tgt.sock 00:05:32.699 17:05:14 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:32.699 17:05:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59719 ']' 00:05:32.699 17:05:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:32.699 17:05:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:32.699 17:05:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:32.699 17:05:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.699 17:05:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.699 [2024-12-06 17:05:14.967557] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:05:32.699 [2024-12-06 17:05:14.967672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59719 ] 00:05:33.264 [2024-12-06 17:05:15.278573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.264 [2024-12-06 17:05:15.318744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.880 17:05:15 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.880 00:05:33.880 17:05:15 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:33.880 INFO: shutting down applications... 00:05:33.880 17:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
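For reference, the launch this test just performed, with flags copied verbatim from the trace (absolute paths shortened; the backgrounding and wait call are assumed glue):

build/bin/spdk_tgt -m 0x1 -s 1024 \
  -r /var/tmp/spdk_tgt.sock \
  --json test/json_config/extra_key.json &   # core mask 0x1, 1024 MiB of memory
app_pid=$!                                   # 59719 in this run
waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock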
00:05:33.880 17:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59719 ]] 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59719 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59719 00:05:33.880 17:05:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59719 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:34.464 17:05:16 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:34.464 SPDK target shutdown done 00:05:34.464 Success 00:05:34.464 17:05:16 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:34.464 00:05:34.464 real 0m1.783s 00:05:34.464 user 0m1.664s 00:05:34.464 sys 0m0.314s 00:05:34.464 17:05:16 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.464 17:05:16 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:34.464 ************************************ 00:05:34.464 END TEST json_config_extra_key 00:05:34.464 ************************************ 00:05:34.465 17:05:16 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.465 17:05:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.465 17:05:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.465 17:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:34.465 ************************************ 00:05:34.465 START TEST alias_rpc 00:05:34.465 ************************************ 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:34.465 * Looking for test storage... 
00:05:34.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.465 17:05:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
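The block above (and its twins in the other test suites) is scripts/common.sh deciding whether the installed lcov is older than version 2 so it can pick compatible coverage flags. The algorithm: split each version string on '.', '-' or ':' and compare numerically field by field, treating missing fields as 0. A hypothetical condensation, assuming numeric fields; the real lt/cmp_versions helpers are more general:

version_lt() {
  local IFS=.-: a b v x y
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
    x=${a[v]:-0}; y=${b[v]:-0}        # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                            # equal is not less-than
}
version_lt 1.15 2 && echo 'older lcov: enable the --rc branch/function coverage options'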
00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.465 --rc genhtml_branch_coverage=1 00:05:34.465 --rc genhtml_function_coverage=1 00:05:34.465 --rc genhtml_legend=1 00:05:34.465 --rc geninfo_all_blocks=1 00:05:34.465 --rc geninfo_unexecuted_blocks=1 00:05:34.465 00:05:34.465 ' 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.465 --rc genhtml_branch_coverage=1 00:05:34.465 --rc genhtml_function_coverage=1 00:05:34.465 --rc genhtml_legend=1 00:05:34.465 --rc geninfo_all_blocks=1 00:05:34.465 --rc geninfo_unexecuted_blocks=1 00:05:34.465 00:05:34.465 ' 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.465 --rc genhtml_branch_coverage=1 00:05:34.465 --rc genhtml_function_coverage=1 00:05:34.465 --rc genhtml_legend=1 00:05:34.465 --rc geninfo_all_blocks=1 00:05:34.465 --rc geninfo_unexecuted_blocks=1 00:05:34.465 00:05:34.465 ' 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.465 --rc genhtml_branch_coverage=1 00:05:34.465 --rc genhtml_function_coverage=1 00:05:34.465 --rc genhtml_legend=1 00:05:34.465 --rc geninfo_all_blocks=1 00:05:34.465 --rc geninfo_unexecuted_blocks=1 00:05:34.465 00:05:34.465 ' 00:05:34.465 17:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:34.465 17:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59809 00:05:34.465 17:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59809 00:05:34.465 17:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59809 ']' 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.465 17:05:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.724 [2024-12-06 17:05:16.792598] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:05:34.724 [2024-12-06 17:05:16.792873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59809 ] 00:05:34.724 [2024-12-06 17:05:17.014744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.983 [2024-12-06 17:05:17.057607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.551 17:05:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.551 17:05:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.551 17:05:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:36.118 17:05:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59809 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59809 ']' 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59809 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59809 00:05:36.118 killing process with pid 59809 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59809' 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@973 -- # kill 59809 00:05:36.118 17:05:18 alias_rpc -- common/autotest_common.sh@978 -- # wait 59809 00:05:36.378 ************************************ 00:05:36.378 END TEST alias_rpc 00:05:36.378 ************************************ 00:05:36.378 00:05:36.378 real 0m1.904s 00:05:36.378 user 0m2.370s 00:05:36.378 sys 0m0.365s 00:05:36.378 17:05:18 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.378 17:05:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.378 17:05:18 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:05:36.378 17:05:18 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.378 17:05:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.378 17:05:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.378 17:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:36.378 ************************************ 00:05:36.378 START TEST dpdk_mem_utility 00:05:36.378 ************************************ 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:36.378 * Looking for test storage... 
00:05:36.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:36.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.378 17:05:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.378 --rc genhtml_branch_coverage=1 00:05:36.378 --rc genhtml_function_coverage=1 00:05:36.378 --rc genhtml_legend=1 00:05:36.378 --rc geninfo_all_blocks=1 00:05:36.378 --rc geninfo_unexecuted_blocks=1 00:05:36.378 00:05:36.378 ' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.378 --rc genhtml_branch_coverage=1 00:05:36.378 --rc genhtml_function_coverage=1 00:05:36.378 --rc genhtml_legend=1 00:05:36.378 --rc geninfo_all_blocks=1 00:05:36.378 --rc geninfo_unexecuted_blocks=1 00:05:36.378 00:05:36.378 ' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.378 --rc genhtml_branch_coverage=1 00:05:36.378 --rc genhtml_function_coverage=1 00:05:36.378 --rc genhtml_legend=1 00:05:36.378 --rc geninfo_all_blocks=1 00:05:36.378 --rc geninfo_unexecuted_blocks=1 00:05:36.378 00:05:36.378 ' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:36.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.378 --rc genhtml_branch_coverage=1 00:05:36.378 --rc genhtml_function_coverage=1 00:05:36.378 --rc genhtml_legend=1 00:05:36.378 --rc geninfo_all_blocks=1 00:05:36.378 --rc geninfo_unexecuted_blocks=1 00:05:36.378 00:05:36.378 ' 00:05:36.378 17:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:36.378 17:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59909 00:05:36.378 17:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59909 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59909 ']' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.378 17:05:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.378 17:05:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.637 [2024-12-06 17:05:18.724487] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:05:36.637 [2024-12-06 17:05:18.724607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59909 ] 00:05:36.637 [2024-12-06 17:05:18.867965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.638 [2024-12-06 17:05:18.899166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.896 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.896 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:36.896 17:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:36.896 17:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:36.896 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.896 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:36.896 { 00:05:36.896 "filename": "/tmp/spdk_mem_dump.txt" 00:05:36.896 } 00:05:36.896 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.896 17:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:36.896 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:36.896 1 heaps totaling size 818.000000 MiB 00:05:36.896 size: 818.000000 MiB heap id: 0 00:05:36.896 end heaps---------- 00:05:36.896 9 mempools totaling size 603.782043 MiB 00:05:36.896 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:36.896 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:36.896 size: 100.555481 MiB name: bdev_io_59909 00:05:36.896 size: 50.003479 MiB name: msgpool_59909 00:05:36.896 size: 36.509338 MiB name: fsdev_io_59909 00:05:36.896 size: 21.763794 MiB name: PDU_Pool 00:05:36.896 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:36.896 size: 4.133484 MiB name: evtpool_59909 00:05:36.897 size: 0.026123 MiB name: Session_Pool 00:05:36.897 end mempools------- 00:05:36.897 6 memzones totaling size 4.142822 MiB 00:05:36.897 size: 1.000366 MiB name: RG_ring_0_59909 00:05:36.897 size: 1.000366 MiB name: RG_ring_1_59909 00:05:36.897 size: 1.000366 MiB name: RG_ring_4_59909 00:05:36.897 size: 1.000366 MiB name: RG_ring_5_59909 00:05:36.897 size: 0.125366 MiB name: RG_ring_2_59909 00:05:36.897 size: 0.015991 MiB name: RG_ring_3_59909 00:05:36.897 end memzones------- 00:05:36.897 17:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:37.156 heap id: 0 total size: 818.000000 MiB number of busy elements: 236 number of free elements: 15 00:05:37.156 list of free elements. 
size: 10.817322 MiB 00:05:37.156 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:37.156 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:37.156 element at address: 0x200000400000 with size: 0.996338 MiB 00:05:37.156 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:37.156 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:37.156 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:37.156 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:37.156 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:37.156 element at address: 0x20001ae00000 with size: 0.571899 MiB 00:05:37.156 element at address: 0x200000c00000 with size: 0.490662 MiB 00:05:37.156 element at address: 0x20000a600000 with size: 0.489441 MiB 00:05:37.156 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:37.156 element at address: 0x200003e00000 with size: 0.481018 MiB 00:05:37.156 element at address: 0x200028200000 with size: 0.396667 MiB 00:05:37.156 element at address: 0x200000800000 with size: 0.353394 MiB 00:05:37.156 list of standard malloc elements. size: 199.253784 MiB 00:05:37.156 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:37.156 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:37.156 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:37.156 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:37.156 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:37.156 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:37.156 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:37.156 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:37.156 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:37.156 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:37.156 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:37.156 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:37.156 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:37.156 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:37.156 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:05:37.156 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000085a780 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000085a980 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:37.157 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:37.157 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:37.157 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:37.157 element at 
address: 0x20000a67d4c0 with size: 0.000183 MiB
00:05:37.157 [free-list dump condensed: roughly 200 further "element at address: 0x... with size: 0.000183 MiB" entries follow here in the original output, covering the 0x20000a67d..., 0x20000a6fd..., 0x200012cf1..., 0x2000196ef..., 0x2000198bc..., 0x20001ae9..., and 0x2000282... regions]
00:05:37.158 list of memzone associated elements. size: 607.928894 MiB
00:05:37.158 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:37.158 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:37.158 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:37.158 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:37.158 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:37.158 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59909_0
00:05:37.158 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:37.158 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59909_0
00:05:37.158 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:37.158 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59909_0
00:05:37.158 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:37.158 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:37.158 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:37.158 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:37.158 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:37.158 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59909_0
00:05:37.158 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:37.158 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59909
00:05:37.158 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:37.158 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59909
00:05:37.158 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:37.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:37.158 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:37.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:37.158 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:37.158 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:37.158 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:37.158 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:37.158 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:37.158 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59909
00:05:37.158 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:37.158 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59909
00:05:37.158 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:37.158 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59909
00:05:37.158 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:37.158 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59909
00:05:37.158 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:37.158 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59909
00:05:37.158 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:37.158 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59909
00:05:37.158 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:37.158 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:37.158 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:37.158 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:37.158 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:37.158 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:37.158 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:37.158 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59909
00:05:37.158 element at address: 0x20000085ed00 with size: 0.125488 MiB 00:05:37.158 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59909
00:05:37.158 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:37.158 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:37.158 element at address: 0x200028265a40 with size: 0.023743 MiB 00:05:37.158 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:37.158 element at address: 0x20000085aa40 with size: 0.016113 MiB 00:05:37.158 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59909
00:05:37.158 element at address: 0x20002826bb80 with size: 0.002441 MiB 00:05:37.158 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:37.158 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:37.158 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59909
00:05:37.158 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:37.158 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59909
00:05:37.158 element at address: 0x20000085a840 with size: 0.000305 MiB 00:05:37.158 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59909
00:05:37.158 element at address: 0x20002826c640 with size: 0.000305 MiB 00:05:37.158 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:37.158 17:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:37.158 17:05:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59909
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59909 ']'
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59909
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59909
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:37.158 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:37.158 killing process with pid 59909
00:05:37.159 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59909'
00:05:37.159 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59909
00:05:37.159 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59909
00:05:37.417 ************************************
00:05:37.417 END TEST dpdk_mem_utility
00:05:37.417 ************************************
00:05:37.417
00:05:37.417 real 0m1.028s
00:05:37.417 user 0m1.091s
00:05:37.417 sys 0m0.317s
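
The killprocess teardown traced above is a small reusable pattern: validate the pid argument, probe the process with kill -0, resolve its name via ps, then kill and reap it. A minimal bash sketch of that pattern (a simplified assumption, not the exact common/autotest_common.sh source, which also treats sudo-wrapped processes specially):

killprocess() {
  local pid=$1
  [ -z "$pid" ] && return 1                 # no pid supplied ('[' -z 59909 ']')
  kill -0 "$pid" 2>/dev/null || return 0    # process already gone, nothing to do
  local process_name
  if [ "$(uname)" = Linux ]; then
    process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
  fi
  # The real helper branches when $process_name is sudo; skipped in this sketch.
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null                   # reap it so the suite timing stays accurate
}
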
00:05:37.417 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.417 17:05:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:37.417 17:05:19 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:37.417 17:05:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.417 17:05:19 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.417 17:05:19 -- common/autotest_common.sh@10 -- # set +x
00:05:37.417 ************************************
00:05:37.417 START TEST event
00:05:37.417 ************************************
00:05:37.417 17:05:19 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:37.417 * Looking for test storage...
00:05:37.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:37.417 17:05:19 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:37.417 17:05:19 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:37.417 17:05:19 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:37.676 17:05:19 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:37.676 17:05:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:37.676 17:05:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:37.676 17:05:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:37.676 17:05:19 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:37.676 17:05:19 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:37.676 17:05:19 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:37.676 17:05:19 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:37.676 17:05:19 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:37.676 17:05:19 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:37.676 17:05:19 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:37.676 17:05:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:37.676 17:05:19 event -- scripts/common.sh@344 -- # case "$op" in
00:05:37.676 17:05:19 event -- scripts/common.sh@345 -- # : 1
00:05:37.676 17:05:19 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:37.676 17:05:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:05:37.676 17:05:19 event -- scripts/common.sh@365 -- # decimal 1 00:05:37.676 17:05:19 event -- scripts/common.sh@353 -- # local d=1 00:05:37.676 17:05:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.676 17:05:19 event -- scripts/common.sh@355 -- # echo 1 00:05:37.676 17:05:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.676 17:05:19 event -- scripts/common.sh@366 -- # decimal 2 00:05:37.676 17:05:19 event -- scripts/common.sh@353 -- # local d=2 00:05:37.676 17:05:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.676 17:05:19 event -- scripts/common.sh@355 -- # echo 2 00:05:37.676 17:05:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.676 17:05:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.676 17:05:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.676 17:05:19 event -- scripts/common.sh@368 -- # return 0 00:05:37.676 17:05:19 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.676 17:05:19 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.677 --rc genhtml_branch_coverage=1 00:05:37.677 --rc genhtml_function_coverage=1 00:05:37.677 --rc genhtml_legend=1 00:05:37.677 --rc geninfo_all_blocks=1 00:05:37.677 --rc geninfo_unexecuted_blocks=1 00:05:37.677 00:05:37.677 ' 00:05:37.677 17:05:19 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.677 --rc genhtml_branch_coverage=1 00:05:37.677 --rc genhtml_function_coverage=1 00:05:37.677 --rc genhtml_legend=1 00:05:37.677 --rc geninfo_all_blocks=1 00:05:37.677 --rc geninfo_unexecuted_blocks=1 00:05:37.677 00:05:37.677 ' 00:05:37.677 17:05:19 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.677 --rc genhtml_branch_coverage=1 00:05:37.677 --rc genhtml_function_coverage=1 00:05:37.677 --rc genhtml_legend=1 00:05:37.677 --rc geninfo_all_blocks=1 00:05:37.677 --rc geninfo_unexecuted_blocks=1 00:05:37.677 00:05:37.677 ' 00:05:37.677 17:05:19 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.677 --rc genhtml_branch_coverage=1 00:05:37.677 --rc genhtml_function_coverage=1 00:05:37.677 --rc genhtml_legend=1 00:05:37.677 --rc geninfo_all_blocks=1 00:05:37.677 --rc geninfo_unexecuted_blocks=1 00:05:37.677 00:05:37.677 ' 00:05:37.677 17:05:19 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:37.677 17:05:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:37.677 17:05:19 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.677 17:05:19 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:37.677 17:05:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.677 17:05:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.677 ************************************ 00:05:37.677 START TEST event_perf 00:05:37.677 ************************************ 00:05:37.677 17:05:19 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:37.677 Running I/O for 1 seconds...[2024-12-06 
17:05:19.763060] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:05:37.677 [2024-12-06 17:05:19.763247] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59993 ] 00:05:37.677 [2024-12-06 17:05:19.904648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:37.677 [2024-12-06 17:05:19.940627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.677 [2024-12-06 17:05:19.940704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:37.677 [2024-12-06 17:05:19.940825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.677 [2024-12-06 17:05:19.940830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.054 Running I/O for 1 seconds... 00:05:39.054 lcore 0: 201244 00:05:39.054 lcore 1: 201243 00:05:39.054 lcore 2: 201243 00:05:39.054 lcore 3: 201243 00:05:39.054 done. 00:05:39.054 00:05:39.054 real 0m1.241s 00:05:39.054 user 0m4.080s 00:05:39.054 sys 0m0.038s 00:05:39.054 17:05:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.054 17:05:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.054 ************************************ 00:05:39.054 END TEST event_perf 00:05:39.054 ************************************ 00:05:39.054 17:05:21 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:39.054 17:05:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:39.054 17:05:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.054 17:05:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.054 ************************************ 00:05:39.054 START TEST event_reactor 00:05:39.054 ************************************ 00:05:39.054 17:05:21 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:39.054 [2024-12-06 17:05:21.050038] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
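
The START TEST/END TEST banners and the real/user/sys timings seen throughout this log come from the run_test wrapper that launched event_perf above and event_reactor below. A rough bash sketch of that wrapper (a simplified assumption, not the exact common/autotest_common.sh source):

run_test() {
  [ "$#" -le 1 ] && return 1   # needs a name plus a command ('[' 2 -le 1 ']' above)
  local test_name=$1
  shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                    # run the suite body; its exit status decides pass/fail
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return $rc
}
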
00:05:39.054 [2024-12-06 17:05:21.050290] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60026 ] 00:05:39.054 [2024-12-06 17:05:21.198130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.054 [2024-12-06 17:05:21.230962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.990 test_start 00:05:39.990 oneshot 00:05:39.990 tick 100 00:05:39.990 tick 100 00:05:39.990 tick 250 00:05:39.990 tick 100 00:05:39.990 tick 100 00:05:39.990 tick 100 00:05:39.990 tick 250 00:05:39.990 tick 500 00:05:39.990 tick 100 00:05:39.990 tick 100 00:05:39.990 tick 250 00:05:39.990 tick 100 00:05:39.990 tick 100 00:05:39.990 test_end 00:05:39.990 00:05:39.990 real 0m1.239s 00:05:39.990 user 0m1.105s 00:05:39.990 sys 0m0.028s 00:05:39.990 17:05:22 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.990 ************************************ 00:05:39.990 END TEST event_reactor 00:05:39.990 ************************************ 00:05:39.990 17:05:22 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:40.248 17:05:22 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.248 17:05:22 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:40.248 17:05:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.248 17:05:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.248 ************************************ 00:05:40.248 START TEST event_reactor_perf 00:05:40.248 ************************************ 00:05:40.248 17:05:22 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:40.248 [2024-12-06 17:05:22.336352] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:05:40.248 [2024-12-06 17:05:22.336453] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60061 ] 00:05:40.248 [2024-12-06 17:05:22.480451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.248 [2024-12-06 17:05:22.513676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.625 test_start 00:05:41.625 test_end 00:05:41.625 Performance: 361972 events per second 00:05:41.625 00:05:41.625 real 0m1.234s 00:05:41.625 user 0m1.095s 00:05:41.625 sys 0m0.033s 00:05:41.625 17:05:23 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.625 17:05:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.625 ************************************ 00:05:41.625 END TEST event_reactor_perf 00:05:41.625 ************************************ 00:05:41.625 17:05:23 event -- event/event.sh@49 -- # uname -s 00:05:41.625 17:05:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:41.625 17:05:23 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:41.625 17:05:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.625 17:05:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.625 17:05:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.625 ************************************ 00:05:41.625 START TEST event_scheduler 00:05:41.625 ************************************ 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:41.625 * Looking for test storage... 
00:05:41.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:41.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.625 17:05:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:41.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.625 --rc genhtml_branch_coverage=1 00:05:41.625 --rc genhtml_function_coverage=1 00:05:41.625 --rc genhtml_legend=1 00:05:41.625 --rc geninfo_all_blocks=1 00:05:41.625 --rc geninfo_unexecuted_blocks=1 00:05:41.625 00:05:41.625 ' 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:41.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.625 --rc genhtml_branch_coverage=1 00:05:41.625 --rc genhtml_function_coverage=1 00:05:41.625 --rc genhtml_legend=1 00:05:41.625 --rc geninfo_all_blocks=1 00:05:41.625 --rc geninfo_unexecuted_blocks=1 00:05:41.625 00:05:41.625 ' 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:41.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.625 --rc genhtml_branch_coverage=1 00:05:41.625 --rc genhtml_function_coverage=1 00:05:41.625 --rc genhtml_legend=1 00:05:41.625 --rc geninfo_all_blocks=1 00:05:41.625 --rc geninfo_unexecuted_blocks=1 00:05:41.625 00:05:41.625 ' 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:41.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.625 --rc genhtml_branch_coverage=1 00:05:41.625 --rc genhtml_function_coverage=1 00:05:41.625 --rc genhtml_legend=1 00:05:41.625 --rc geninfo_all_blocks=1 00:05:41.625 --rc geninfo_unexecuted_blocks=1 00:05:41.625 00:05:41.625 ' 00:05:41.625 17:05:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:41.625 17:05:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60131 00:05:41.625 17:05:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.625 17:05:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:41.625 17:05:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60131 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60131 ']' 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.625 17:05:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.626 17:05:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.626 17:05:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.626 17:05:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:41.626 [2024-12-06 17:05:23.843575] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
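
The lt 1.15 2 / cmp_versions xtrace repeated before each suite (here gating which lcov coverage options get exported) is a field-wise version comparison: both strings are split on ., - and :, then compared field by field, with the shorter one padded with zeros. A minimal sketch of the idea (an assumption-level simplification; scripts/common.sh also validates that fields are numeric and supports the other operators):

version_lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # first differing field decides
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
  done
  return 1                                          # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"    # same verdict as the gate above
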
00:05:41.626 [2024-12-06 17:05:23.844456] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60131 ] 00:05:41.884 [2024-12-06 17:05:23.999247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:41.884 [2024-12-06 17:05:24.041805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.884 [2024-12-06 17:05:24.041915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.884 [2024-12-06 17:05:24.041919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.884 [2024-12-06 17:05:24.041869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.858 17:05:24 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.858 17:05:24 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:42.858 17:05:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:42.858 17:05:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.858 17:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.858 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.858 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.859 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.859 POWER: Cannot set governor of lcore 0 to performance 00:05:42.859 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.859 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.859 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:42.859 POWER: Cannot set governor of lcore 0 to userspace 00:05:42.859 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:42.859 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:42.859 POWER: Unable to set Power Management Environment for lcore 0 00:05:42.859 [2024-12-06 17:05:24.860811] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:42.859 [2024-12-06 17:05:24.860829] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:42.859 [2024-12-06 17:05:24.860844] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:42.859 [2024-12-06 17:05:24.860863] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:42.859 [2024-12-06 17:05:24.860873] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:42.859 [2024-12-06 17:05:24.860880] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 [2024-12-06 17:05:24.922255] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
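
The POWER/governor errors above are expected inside a VM: with no usable cpufreq sysfs and no virtio power channel, the dynamic scheduler cannot install a governor and falls back to its built-in defaults (load limit 20, core limit 80, core busy 95), exactly as the scheduler_dynamic notices report. Because the app was started with --wait-for-rpc, the scheduler is selected over RPC before framework initialization; issued by hand, the sequence is roughly (a sketch using the socket path from this run):

scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
scripts/rpc.py -s /var/tmp/spdk.sock framework_get_reactors   # optional: inspect reactor/thread placement
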
00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 ************************************ 00:05:42.859 START TEST scheduler_create_thread 00:05:42.859 ************************************ 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 2 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 3 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 4 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 5 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 6 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 7 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 8 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 9 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 10 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.859 17:05:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.288 17:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.288 17:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:44.288 17:05:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:44.288 17:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.288 17:05:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.665 ************************************ 00:05:45.665 END TEST scheduler_create_thread 00:05:45.665 ************************************ 00:05:45.665 17:05:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.665 00:05:45.665 real 0m2.617s 00:05:45.665 user 0m0.017s 00:05:45.665 sys 0m0.007s 00:05:45.665 17:05:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.665 17:05:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.665 17:05:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:45.665 17:05:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60131 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60131 ']' 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60131 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60131 00:05:45.665 killing process with pid 60131 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60131' 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60131 00:05:45.665 17:05:27 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60131 00:05:45.924 [2024-12-06 17:05:28.029839] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:45.924 ************************************ 00:05:45.924 END TEST event_scheduler 00:05:45.924 ************************************ 00:05:45.924 00:05:45.924 real 0m4.574s 00:05:45.924 user 0m8.878s 00:05:45.924 sys 0m0.338s 00:05:45.924 17:05:28 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.924 17:05:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:45.924 17:05:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:45.924 17:05:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:45.924 17:05:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.924 17:05:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.924 17:05:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.183 ************************************ 00:05:46.183 START TEST app_repeat 00:05:46.183 ************************************ 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:46.183 Process app_repeat pid: 60243 00:05:46.183 spdk_app_start Round 0 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60243 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60243' 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:46.183 17:05:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60243 /var/tmp/spdk-nbd.sock 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60243 ']' 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:46.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.183 17:05:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:46.183 [2024-12-06 17:05:28.254066] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:05:46.183 [2024-12-06 17:05:28.254144] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60243 ] 00:05:46.183 [2024-12-06 17:05:28.397797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:46.183 [2024-12-06 17:05:28.431951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.183 [2024-12-06 17:05:28.431963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.442 17:05:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.442 17:05:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:46.442 17:05:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.700 Malloc0 00:05:46.700 17:05:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.957 Malloc1 00:05:46.957 17:05:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.957 17:05:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:47.215 /dev/nbd0 00:05:47.473 17:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:47.473 17:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:47.473 17:05:29 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.473 1+0 records in 00:05:47.473 1+0 records out 00:05:47.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318642 s, 12.9 MB/s 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.473 17:05:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.473 17:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.473 17:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.473 17:05:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:47.732 /dev/nbd1 00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:47.732 1+0 records in 00:05:47.732 1+0 records out 00:05:47.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302906 s, 13.5 MB/s 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:47.732 17:05:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:47.732 17:05:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.990 17:05:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.990 { 00:05:47.990 "bdev_name": "Malloc0", 00:05:47.990 "nbd_device": "/dev/nbd0" 00:05:47.990 }, 00:05:47.990 { 00:05:47.990 "bdev_name": "Malloc1", 00:05:47.990 "nbd_device": "/dev/nbd1" 00:05:47.990 } 00:05:47.990 ]' 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.991 { 00:05:47.991 "bdev_name": "Malloc0", 00:05:47.991 "nbd_device": "/dev/nbd0" 00:05:47.991 }, 00:05:47.991 { 00:05:47.991 "bdev_name": "Malloc1", 00:05:47.991 "nbd_device": "/dev/nbd1" 00:05:47.991 } 00:05:47.991 ]' 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.991 /dev/nbd1' 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.991 /dev/nbd1' 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.991 17:05:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.250 256+0 records in 00:05:48.250 256+0 records out 00:05:48.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101909 s, 103 MB/s 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.250 256+0 records in 00:05:48.250 256+0 records out 00:05:48.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220804 s, 47.5 MB/s 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.250 256+0 records in 00:05:48.250 256+0 records out 00:05:48.250 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0335069 s, 31.3 MB/s 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.250 17:05:30 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.250 17:05:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:48.508 17:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:48.508 17:05:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.509 17:05:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.767 17:05:31 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.767 17:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.334 17:05:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.334 17:05:31 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.594 17:05:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:49.594 [2024-12-06 17:05:31.844856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.594 [2024-12-06 17:05:31.878251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.594 [2024-12-06 17:05:31.878260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.852 [2024-12-06 17:05:31.907523] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:49.852 [2024-12-06 17:05:31.907581] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:53.190 17:05:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.190 spdk_app_start Round 1 00:05:53.190 17:05:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:53.190 17:05:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60243 /var/tmp/spdk-nbd.sock 00:05:53.190 17:05:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60243 ']' 00:05:53.190 17:05:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.190 17:05:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.190 17:05:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
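The restart cycle visible here (spdk_kill_instance, a 3-second pause, then a new round banner and waitforlisten) is driven by a small loop in event/event.sh. A condensed sketch of its shape, assuming the pid variable and socket path from the trace; the real script's body differs in detail:

    for i in {0..2}; do
        echo "spdk_app_start Round $((i + 1))"
        # Block until the app instance is up and listening on its RPC socket.
        waitforlisten "$app_repeat_pid" /var/tmp/spdk-nbd.sock
        # ... create Malloc0/Malloc1, expose them as /dev/nbd0 and /dev/nbd1,
        # then write and verify 1 MiB on each (the traffic traced above) ...
        scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3   # let app_repeat restart the app for the next round
    done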
00:05:53.190 17:05:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.190 17:05:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.190 17:05:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.190 17:05:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.190 17:05:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.190 Malloc0 00:05:53.190 17:05:35 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.447 Malloc1 00:05:53.447 17:05:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.447 17:05:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.704 /dev/nbd0 00:05:53.961 17:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.961 17:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.961 1+0 records in 00:05:53.961 1+0 records out 
00:05:53.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023754 s, 17.2 MB/s 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.961 17:05:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.961 17:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.961 17:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.961 17:05:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.217 /dev/nbd1 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.217 1+0 records in 00:05:54.217 1+0 records out 00:05:54.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285067 s, 14.4 MB/s 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.217 17:05:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.217 17:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.475 { 00:05:54.475 "bdev_name": "Malloc0", 00:05:54.475 "nbd_device": "/dev/nbd0" 00:05:54.475 }, 00:05:54.475 { 00:05:54.475 "bdev_name": "Malloc1", 00:05:54.475 "nbd_device": "/dev/nbd1" 00:05:54.475 } 
00:05:54.475 ]' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.475 { 00:05:54.475 "bdev_name": "Malloc0", 00:05:54.475 "nbd_device": "/dev/nbd0" 00:05:54.475 }, 00:05:54.475 { 00:05:54.475 "bdev_name": "Malloc1", 00:05:54.475 "nbd_device": "/dev/nbd1" 00:05:54.475 } 00:05:54.475 ]' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.475 /dev/nbd1' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.475 /dev/nbd1' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.475 256+0 records in 00:05:54.475 256+0 records out 00:05:54.475 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105124 s, 99.7 MB/s 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.475 17:05:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.732 256+0 records in 00:05:54.732 256+0 records out 00:05:54.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023029 s, 45.5 MB/s 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.732 256+0 records in 00:05:54.732 256+0 records out 00:05:54.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023058 s, 45.5 MB/s 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.732 17:05:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.990 17:05:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.248 17:05:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.504 17:05:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.504 17:05:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.504 17:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.762 17:05:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.762 17:05:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.020 17:05:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.278 [2024-12-06 17:05:38.333483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.278 [2024-12-06 17:05:38.370054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.278 [2024-12-06 17:05:38.370083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.278 [2024-12-06 17:05:38.400788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.278 [2024-12-06 17:05:38.400844] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.586 17:05:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.586 spdk_app_start Round 2 00:05:59.586 17:05:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:59.586 17:05:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60243 /var/tmp/spdk-nbd.sock 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60243 ']' 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
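The bare `true` in the count check above is worth a note: with no disks attached, nbd_get_disks returns '[]', and `grep -c` then prints 0 but exits non-zero, which would abort a `set -e` script. A minimal sketch of the guarded count, simplified from the nbd_common.sh commands in the trace:

    nbd_disks_json='[]'                                          # RPC output after nbd_stop_disk
    names=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')  # empty string
    # grep -c still prints the count (0) on no match, but exits 1; the
    # `|| true` guard is what shows up as the lone `true` in the xtrace above.
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -ne 0 ] || echo "all nbd devices detached"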
00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.586 17:05:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.586 17:05:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.845 Malloc0 00:05:59.845 17:05:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.104 Malloc1 00:06:00.104 17:05:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.104 17:05:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.371 /dev/nbd0 00:06:00.371 17:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.371 17:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.371 1+0 records in 00:06:00.371 1+0 records out 
00:06:00.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251716 s, 16.3 MB/s 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.371 17:05:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.371 17:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.371 17:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.371 17:05:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.642 /dev/nbd1 00:06:00.642 17:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.642 17:05:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.643 1+0 records in 00:06:00.643 1+0 records out 00:06:00.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388415 s, 10.5 MB/s 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.643 17:05:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.643 17:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.643 17:05:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.643 17:05:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.643 17:05:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.643 17:05:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.210 { 00:06:01.210 "bdev_name": "Malloc0", 00:06:01.210 "nbd_device": "/dev/nbd0" 00:06:01.210 }, 00:06:01.210 { 00:06:01.210 "bdev_name": "Malloc1", 00:06:01.210 "nbd_device": "/dev/nbd1" 00:06:01.210 } 
00:06:01.210 ]' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.210 { 00:06:01.210 "bdev_name": "Malloc0", 00:06:01.210 "nbd_device": "/dev/nbd0" 00:06:01.210 }, 00:06:01.210 { 00:06:01.210 "bdev_name": "Malloc1", 00:06:01.210 "nbd_device": "/dev/nbd1" 00:06:01.210 } 00:06:01.210 ]' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.210 /dev/nbd1' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.210 /dev/nbd1' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.210 256+0 records in 00:06:01.210 256+0 records out 00:06:01.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010804 s, 97.1 MB/s 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.210 256+0 records in 00:06:01.210 256+0 records out 00:06:01.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228572 s, 45.9 MB/s 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.210 256+0 records in 00:06:01.210 256+0 records out 00:06:01.210 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276563 s, 37.9 MB/s 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.210 17:05:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.469 17:05:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.470 17:05:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.470 17:05:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.470 17:05:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.037 17:05:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.296 17:05:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.296 17:05:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.555 17:05:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.813 [2024-12-06 17:05:44.919349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.813 [2024-12-06 17:05:44.951931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.813 [2024-12-06 17:05:44.951941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.813 [2024-12-06 17:05:44.980931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.813 [2024-12-06 17:05:44.980993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:06.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.096 17:05:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60243 /var/tmp/spdk-nbd.sock 00:06:06.096 17:05:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60243 ']' 00:06:06.096 17:05:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.096 17:05:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.096 17:05:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
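Each round's data check follows the same pattern: fill a temp file with random data, copy it onto every nbd device with direct I/O, then compare the device contents back against the file. A condensed sketch of nbd_dd_data_verify, using the paths and sizes from the trace:

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # 256 blocks of 4 KiB = the 1 MiB of random data seen in the dd output.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        # oflag=direct bypasses the page cache so the data really hits the bdev.
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"   # byte-for-byte comparison of the first 1 MiB
    done
    rm "$tmp"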
00:06:06.096 17:05:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.096 17:05:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:06.096 17:05:48 event.app_repeat -- event/event.sh@39 -- # killprocess 60243 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60243 ']' 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60243 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60243 00:06:06.096 killing process with pid 60243 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60243' 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60243 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60243 00:06:06.096 spdk_app_start is called in Round 0. 00:06:06.096 Shutdown signal received, stop current app iteration 00:06:06.096 Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 reinitialization... 00:06:06.096 spdk_app_start is called in Round 1. 00:06:06.096 Shutdown signal received, stop current app iteration 00:06:06.096 Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 reinitialization... 00:06:06.096 spdk_app_start is called in Round 2. 00:06:06.096 Shutdown signal received, stop current app iteration 00:06:06.096 Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 reinitialization... 00:06:06.096 spdk_app_start is called in Round 3. 00:06:06.096 Shutdown signal received, stop current app iteration 00:06:06.096 ************************************ 00:06:06.096 END TEST app_repeat 00:06:06.096 17:05:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:06.096 17:05:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:06.096 00:06:06.096 real 0m20.094s 00:06:06.096 user 0m46.700s 00:06:06.096 sys 0m3.092s 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.096 17:05:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:06.096 ************************************ 00:06:06.096 17:05:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:06.096 17:05:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.096 17:05:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.096 17:05:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.096 17:05:48 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.096 ************************************ 00:06:06.096 START TEST cpu_locks 00:06:06.096 ************************************ 00:06:06.096 17:05:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:06.354 * Looking for test storage... 
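The killprocess trace above reduces to a short helper: confirm the pid is alive, make sure it is not a sudo wrapper, then signal and reap it. A sketch matching the checks shown; the real autotest_common.sh has extra fallbacks, and how it actually handles the sudo case is an assumption here:

    killprocess() {
        local pid=$1
        kill -0 "$pid"   # fail fast if the process is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Signalling a sudo wrapper would orphan the real process (assumption).
            [ "$process_name" != sudo ] || return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"      # reap it so its sockets and lock files are released
    }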
00:06:06.354 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:06.354 17:05:48 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.355 17:05:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.355 --rc genhtml_branch_coverage=1 00:06:06.355 --rc genhtml_function_coverage=1 00:06:06.355 --rc genhtml_legend=1 00:06:06.355 --rc geninfo_all_blocks=1 00:06:06.355 --rc geninfo_unexecuted_blocks=1 00:06:06.355 00:06:06.355 ' 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.355 --rc genhtml_branch_coverage=1 00:06:06.355 --rc genhtml_function_coverage=1 
00:06:06.355 --rc genhtml_legend=1 00:06:06.355 --rc geninfo_all_blocks=1 00:06:06.355 --rc geninfo_unexecuted_blocks=1 00:06:06.355 00:06:06.355 ' 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.355 --rc genhtml_branch_coverage=1 00:06:06.355 --rc genhtml_function_coverage=1 00:06:06.355 --rc genhtml_legend=1 00:06:06.355 --rc geninfo_all_blocks=1 00:06:06.355 --rc geninfo_unexecuted_blocks=1 00:06:06.355 00:06:06.355 ' 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.355 --rc genhtml_branch_coverage=1 00:06:06.355 --rc genhtml_function_coverage=1 00:06:06.355 --rc genhtml_legend=1 00:06:06.355 --rc geninfo_all_blocks=1 00:06:06.355 --rc geninfo_unexecuted_blocks=1 00:06:06.355 00:06:06.355 ' 00:06:06.355 17:05:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:06.355 17:05:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:06.355 17:05:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:06.355 17:05:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.355 17:05:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.355 ************************************ 00:06:06.355 START TEST default_locks 00:06:06.355 ************************************ 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60886 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60886 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60886 ']' 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.355 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.355 [2024-12-06 17:05:48.621460] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
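The lcov version gate traced above splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A condensed sketch of that comparison, covering only the strict '<' and '>' operators exercised here; treating missing fields as 0 is an assumption:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:   # field separators used by the traced script
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad short versions with 0
            ((a == b)) && continue
            case $2 in
                '<') ((a < b)); return ;;
                '>') ((a > b)); return ;;
            esac
        done
        return 1   # equal versions: both strict comparisons are false
    }

    lt 1.15 2 && echo "lcov is older than 2"   # true for the run above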
00:06:06.355 [2024-12-06 17:05:48.621561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60886 ] 00:06:06.614 [2024-12-06 17:05:48.774246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.614 [2024-12-06 17:05:48.814790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.873 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.873 17:05:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:06.873 17:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60886 00:06:06.873 17:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60886 00:06:06.873 17:05:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60886 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60886 ']' 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60886 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60886 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.131 killing process with pid 60886 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60886' 00:06:07.131 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60886 00:06:07.132 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60886 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60886 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60886 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60886 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60886 ']' 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.390 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.390 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60886) - No such process 00:06:07.390 ERROR: process (pid: 60886) is no longer running 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.390 00:06:07.390 real 0m1.129s 00:06:07.390 user 0m1.196s 00:06:07.390 sys 0m0.450s 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.390 17:05:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.390 ************************************ 00:06:07.390 END TEST default_locks 00:06:07.390 ************************************ 00:06:07.650 17:05:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:07.650 17:05:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.650 17:05:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.650 17:05:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.650 ************************************ 00:06:07.650 START TEST default_locks_via_rpc 00:06:07.650 ************************************ 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60931 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60931 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60931 ']' 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
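The block above is a deliberate negative test: pid 60886 was already killed, so waitforlisten must fail, and the NOT wrapper inverts that failure into a pass. A minimal sketch of NOT's logic; the real helper also validates its argument and special-cases exit codes above 128, both omitted here:

    NOT() {
        local es=0
        "$@" || es=$?   # run the command, capture its exit status
        # Pass only if the wrapped command failed.
        ((es != 0))
    }

    # Usage mirroring the trace: the daemon is gone, so this must "fail well".
    NOT waitforlisten 60886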
00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.650 17:05:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.650 [2024-12-06 17:05:49.801509] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:07.650 [2024-12-06 17:05:49.801612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60931 ] 00:06:07.909 [2024-12-06 17:05:49.947475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.909 [2024-12-06 17:05:49.980610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60931 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60931 00:06:07.909 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60931 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60931 ']' 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60931 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60931 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.478 killing process with pid 60931 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60931' 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60931 00:06:08.478 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60931 00:06:08.737 00:06:08.737 real 0m1.238s 00:06:08.737 user 0m1.300s 00:06:08.737 sys 0m0.484s 00:06:08.737 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.737 17:05:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.737 ************************************ 00:06:08.737 END TEST default_locks_via_rpc 00:06:08.737 ************************************ 00:06:08.737 17:05:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:08.737 17:05:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.737 17:05:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.737 17:05:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.737 ************************************ 00:06:08.737 START TEST non_locking_app_on_locked_coremask 00:06:08.737 ************************************ 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60981 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60981 /var/tmp/spdk.sock 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60981 ']' 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.737 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.996 [2024-12-06 17:05:51.098866] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
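default_locks_via_rpc, which just passed above, exercises the same lock files but toggles them at runtime over the RPC socket: framework_disable_cpumask_locks releases the per-core locks, framework_enable_cpumask_locks re-claims them, and lslocks -p <pid> piped through grep spdk_cpu_lock confirms the lock is held again. A rough manual equivalent using SPDK's rpc.py client (the pgrep usage is illustrative only):

    # assumes a target already listening on the default /var/tmp/spdk.sock
    scripts/rpc.py framework_disable_cpumask_locks    # drop the core locks
    scripts/rpc.py framework_enable_cpumask_locks     # re-acquire them
    lslocks -p "$(pgrep -f spdk_tgt)" | grep spdk_cpu_lock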
00:06:08.996 [2024-12-06 17:05:51.098961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:06:08.996 [2024-12-06 17:05:51.246582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.996 [2024-12-06 17:05:51.280157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61001 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61001 /var/tmp/spdk2.sock 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61001 ']' 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.259 17:05:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.259 [2024-12-06 17:05:51.541051] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:09.259 [2024-12-06 17:05:51.541176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61001 ] 00:06:09.529 [2024-12-06 17:05:51.709898] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
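In non_locking_app_on_locked_coremask the first target (pid 60981) holds the core 0 lock, and the second instance is deliberately started with --disable-cpumask-locks plus a separate RPC socket, which is why it logs "CPU core locks deactivated" above and can share the core. The launch pair as run here, paths as in this job:

    build/bin/spdk_tgt -m 0x1 &                                                  # claims the core 0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock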
00:06:09.529 [2024-12-06 17:05:51.709954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.529 [2024-12-06 17:05:51.788590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.464 17:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.464 17:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.464 17:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60981 00:06:10.464 17:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60981 00:06:10.464 17:05:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60981 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60981 ']' 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60981 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60981 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.401 killing process with pid 60981 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60981' 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60981 00:06:11.401 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60981 00:06:11.968 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61001 00:06:11.968 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61001 ']' 00:06:11.968 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61001 00:06:11.968 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.968 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.968 17:05:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61001 00:06:11.968 17:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.968 17:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.968 killing process with pid 61001 00:06:11.968 17:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61001' 00:06:11.968 17:05:54 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61001 00:06:11.968 17:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61001 00:06:11.968 00:06:11.968 real 0m3.222s 00:06:11.968 user 0m3.922s 00:06:11.968 sys 0m0.928s 00:06:11.968 17:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.968 17:05:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.968 ************************************ 00:06:11.968 END TEST non_locking_app_on_locked_coremask 00:06:11.968 ************************************ 00:06:12.228 17:05:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.228 17:05:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.228 17:05:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.228 17:05:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 ************************************ 00:06:12.228 START TEST locking_app_on_unlocked_coremask 00:06:12.228 ************************************ 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61075 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61075 /var/tmp/spdk.sock 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61075 ']' 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.228 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 [2024-12-06 17:05:54.368857] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:12.228 [2024-12-06 17:05:54.368963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61075 ] 00:06:12.228 [2024-12-06 17:05:54.518515] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
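locking_app_on_unlocked_coremask inverts that setup: this time the first target (pid 61075) is the one running with --disable-cpumask-locks, so core 0 is left unclaimed and a second target started with locking enabled on the same mask acquires the lock, and both coexist. Sketch, with $pid2 standing in for the second target's pid:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # leaves core 0 free
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # claims the core 0 lock
    lslocks -p "$pid2" | grep spdk_cpu_lock               # lock held by the second pid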
00:06:12.228 [2024-12-06 17:05:54.518556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.487 [2024-12-06 17:05:54.552323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61089 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61089 /var/tmp/spdk2.sock 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61089 ']' 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.487 17:05:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.745 [2024-12-06 17:05:54.797930] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:06:12.745 [2024-12-06 17:05:54.798025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61089 ] 00:06:12.745 [2024-12-06 17:05:54.960229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.745 [2024-12-06 17:05:55.029676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.679 17:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.679 17:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.679 17:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61089 00:06:13.679 17:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61089 00:06:13.679 17:05:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61075 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61075 ']' 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61075 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61075 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.614 killing process with pid 61075 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61075' 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61075 00:06:14.614 17:05:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61075 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61089 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61089 ']' 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 61089 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61089 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.182 killing process with pid 61089 00:06:15.182 17:05:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61089' 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 61089 00:06:15.182 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 61089 00:06:15.440 00:06:15.440 real 0m3.272s 00:06:15.440 user 0m3.883s 00:06:15.440 sys 0m0.929s 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.440 ************************************ 00:06:15.440 END TEST locking_app_on_unlocked_coremask 00:06:15.440 ************************************ 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.440 17:05:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.440 17:05:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.440 17:05:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.440 17:05:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.440 ************************************ 00:06:15.440 START TEST locking_app_on_locked_coremask 00:06:15.440 ************************************ 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61164 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61164 /var/tmp/spdk.sock 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61164 ']' 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.440 17:05:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.440 [2024-12-06 17:05:57.682853] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:06:15.441 [2024-12-06 17:05:57.682939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61164 ] 00:06:15.698 [2024-12-06 17:05:57.829308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.698 [2024-12-06 17:05:57.868857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.955 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61177 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61177 /var/tmp/spdk2.sock 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61177 /var/tmp/spdk2.sock 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61177 /var/tmp/spdk2.sock 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61177 ']' 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.956 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.956 [2024-12-06 17:05:58.125851] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
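Both instances in locking_app_on_locked_coremask want the core 0 lock, so the harness wraps the second start in its NOT helper, traced above as valid_exec_arg/waitforlisten: the wrapped command is expected to fail, and the (( !es == 0 )) check converts that expected failure into a pass. A hypothetical reduction of the idiom:

    # succeed only if the wrapped command fails
    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # passes because startup is refused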
00:06:15.956 [2024-12-06 17:05:58.125960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61177 ] 00:06:16.214 [2024-12-06 17:05:58.287420] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61164 has claimed it. 00:06:16.214 [2024-12-06 17:05:58.287488] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.781 ERROR: process (pid: 61177) is no longer running 00:06:16.781 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61177) - No such process 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61164 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.781 17:05:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61164 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61164 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61164 ']' 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61164 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61164 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.349 killing process with pid 61164 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61164' 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61164 00:06:17.349 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61164 00:06:17.608 00:06:17.608 real 0m2.060s 00:06:17.608 user 0m2.493s 00:06:17.608 sys 0m0.523s 00:06:17.608 17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.608 ************************************ 00:06:17.608 
17:05:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.608 END TEST locking_app_on_locked_coremask 00:06:17.608 ************************************ 00:06:17.608 17:05:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:17.608 17:05:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.608 17:05:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.608 17:05:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.608 ************************************ 00:06:17.608 START TEST locking_overlapped_coremask 00:06:17.608 ************************************ 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61233 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61233 /var/tmp/spdk.sock 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61233 ']' 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.608 17:05:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:17.608 [2024-12-06 17:05:59.809839] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
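That refusal is the point of the test just closed: app.c rejects the second instance with "Cannot create lock on core 0, probably process 61164 has claimed it", the kill probe then finds no such process, and the surviving target still holds its lock until killprocess tears it down. Reproducing the conflict by hand (the second command exits non-zero):

    build/bin/spdk_tgt -m 0x1 & sleep 1
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
    echo $?   # non-zero: the core 0 lock is already claimed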
00:06:17.608 [2024-12-06 17:05:59.809944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61233 ] 00:06:17.867 [2024-12-06 17:05:59.967676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.867 [2024-12-06 17:06:00.012782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.867 [2024-12-06 17:06:00.012836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.867 [2024-12-06 17:06:00.012845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61245 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61245 /var/tmp/spdk2.sock 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61245 /var/tmp/spdk2.sock 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61245 /var/tmp/spdk2.sock 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61245 ']' 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.131 17:06:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.131 [2024-12-06 17:06:00.286790] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
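The two masks in locking_overlapped_coremask collide on exactly one core: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so their intersection is 0x7 & 0x1c = 0x4, core 2, which is precisely the core named in the lock error that follows. Quick check:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> overlap: 0x4, i.e. core 2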
00:06:18.131 [2024-12-06 17:06:00.286912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61245 ] 00:06:18.397 [2024-12-06 17:06:00.454256] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61233 has claimed it. 00:06:18.397 [2024-12-06 17:06:00.458336] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:18.964 ERROR: process (pid: 61245) is no longer running 00:06:18.964 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61245) - No such process 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61233 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61233 ']' 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61233 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61233 00:06:18.964 killing process with pid 61233 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.964 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61233' 00:06:18.965 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61233 00:06:18.965 17:06:01 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61233 00:06:19.224 00:06:19.224 real 0m1.608s 00:06:19.224 user 0m4.463s 00:06:19.224 sys 0m0.347s 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 ************************************ 00:06:19.224 END TEST locking_overlapped_coremask 00:06:19.224 ************************************ 00:06:19.224 17:06:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:19.224 17:06:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.224 17:06:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.224 17:06:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 ************************************ 00:06:19.224 START TEST locking_overlapped_coremask_via_rpc 00:06:19.224 ************************************ 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61297 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61297 /var/tmp/spdk.sock 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61297 ']' 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.224 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 [2024-12-06 17:06:01.470165] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:19.224 [2024-12-06 17:06:01.470798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61297 ] 00:06:19.483 [2024-12-06 17:06:01.620333] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
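The check_remaining_locks helper traced above then asserts that the surviving 0x7 target still holds exactly the three expected lock files; the comparison is a straight glob-versus-brace-expansion string match, essentially:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks 000-002 intact"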
00:06:19.483 [2024-12-06 17:06:01.620393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:19.483 [2024-12-06 17:06:01.656850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.483 [2024-12-06 17:06:01.660412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.483 [2024-12-06 17:06:01.660431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61313 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61313 /var/tmp/spdk2.sock 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61313 ']' 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.742 17:06:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.742 [2024-12-06 17:06:01.928356] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:19.742 [2024-12-06 17:06:01.928661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61313 ] 00:06:20.000 [2024-12-06 17:06:02.095631] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.000 [2024-12-06 17:06:02.095704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:20.000 [2024-12-06 17:06:02.166806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:20.000 [2024-12-06 17:06:02.166879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:20.000 [2024-12-06 17:06:02.166880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.259 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.259 [2024-12-06 17:06:02.480431] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61297 has claimed it. 
00:06:20.259 2024/12/06 17:06:02 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:20.259 request: 00:06:20.259 { 00:06:20.259 "method": "framework_enable_cpumask_locks", 00:06:20.259 "params": {} 00:06:20.259 } 00:06:20.259 Got JSON-RPC error response 00:06:20.259 GoRPCClient: error on JSON-RPC call 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61297 /var/tmp/spdk.sock 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61297 ']' 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.260 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61313 /var/tmp/spdk2.sock 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61313 ']' 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
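The via_rpc variant surfaces the same conflict through the RPC layer: both targets start with --disable-cpumask-locks, the first claims its cores with framework_enable_cpumask_locks, and the second's attempt over /var/tmp/spdk2.sock fails with JSON-RPC error Code=-32603 ("Failed to claim CPU core: 2"), which the NOT wrapper again counts as a pass. A manual equivalent, assuming both sockets are live:

    scripts/rpc.py framework_enable_cpumask_locks                         # first target: succeeds
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 is contested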
00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.827 17:06:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.827 00:06:20.827 real 0m1.703s 00:06:20.827 user 0m1.120s 00:06:20.827 sys 0m0.166s 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.827 17:06:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.827 ************************************ 00:06:20.827 END TEST locking_overlapped_coremask_via_rpc 00:06:20.827 ************************************ 00:06:21.087 17:06:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:21.087 17:06:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61297 ]] 00:06:21.087 17:06:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61297 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61297 ']' 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61297 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61297 00:06:21.087 killing process with pid 61297 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61297' 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61297 00:06:21.087 17:06:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61297 00:06:21.345 17:06:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61313 ]] 00:06:21.345 17:06:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61313 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61313 ']' 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61313 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.345 
17:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61313 00:06:21.345 killing process with pid 61313 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61313' 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61313 00:06:21.345 17:06:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61313 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.604 Process with pid 61297 is not found 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61297 ]] 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61297 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61297 ']' 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61297 00:06:21.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61297) - No such process 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61297 is not found' 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61313 ]] 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61313 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61313 ']' 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61313 00:06:21.604 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61313) - No such process 00:06:21.604 Process with pid 61313 is not found 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61313 is not found' 00:06:21.604 17:06:03 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:21.604 00:06:21.604 real 0m15.358s 00:06:21.604 user 0m26.908s 00:06:21.604 sys 0m4.531s 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.604 17:06:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.604 ************************************ 00:06:21.604 END TEST cpu_locks 00:06:21.604 ************************************ 00:06:21.604 00:06:21.604 real 0m44.210s 00:06:21.604 user 1m28.978s 00:06:21.604 sys 0m8.306s 00:06:21.604 17:06:03 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.604 17:06:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.604 ************************************ 00:06:21.604 END TEST event 00:06:21.604 ************************************ 00:06:21.604 17:06:03 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.604 17:06:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.604 17:06:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.604 17:06:03 -- common/autotest_common.sh@10 -- # set +x 00:06:21.604 ************************************ 00:06:21.604 START TEST thread 00:06:21.604 ************************************ 00:06:21.604 17:06:03 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:21.604 * Looking for test storage... 
00:06:21.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:21.604 17:06:03 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:21.862 17:06:03 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.862 17:06:03 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.862 17:06:03 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.862 17:06:03 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.862 17:06:03 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.862 17:06:03 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.862 17:06:03 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.862 17:06:03 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.862 17:06:03 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.862 17:06:03 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.862 17:06:03 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.862 17:06:03 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:21.862 17:06:03 thread -- scripts/common.sh@345 -- # : 1 00:06:21.862 17:06:03 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.862 17:06:03 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.862 17:06:03 thread -- scripts/common.sh@365 -- # decimal 1 00:06:21.862 17:06:03 thread -- scripts/common.sh@353 -- # local d=1 00:06:21.862 17:06:03 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.862 17:06:03 thread -- scripts/common.sh@355 -- # echo 1 00:06:21.862 17:06:03 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.862 17:06:03 thread -- scripts/common.sh@366 -- # decimal 2 00:06:21.862 17:06:03 thread -- scripts/common.sh@353 -- # local d=2 00:06:21.862 17:06:03 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.862 17:06:03 thread -- scripts/common.sh@355 -- # echo 2 00:06:21.862 17:06:03 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.862 17:06:03 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.862 17:06:03 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.862 17:06:03 thread -- scripts/common.sh@368 -- # return 0 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:21.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.862 --rc genhtml_branch_coverage=1 00:06:21.862 --rc genhtml_function_coverage=1 00:06:21.862 --rc genhtml_legend=1 00:06:21.862 --rc geninfo_all_blocks=1 00:06:21.862 --rc geninfo_unexecuted_blocks=1 00:06:21.862 00:06:21.862 ' 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:21.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.862 --rc genhtml_branch_coverage=1 00:06:21.862 --rc genhtml_function_coverage=1 00:06:21.862 --rc genhtml_legend=1 00:06:21.862 --rc geninfo_all_blocks=1 00:06:21.862 --rc geninfo_unexecuted_blocks=1 00:06:21.862 00:06:21.862 ' 00:06:21.862 17:06:03 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:21.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:21.862 --rc genhtml_branch_coverage=1 00:06:21.862 --rc genhtml_function_coverage=1 00:06:21.863 --rc genhtml_legend=1 00:06:21.863 --rc geninfo_all_blocks=1 00:06:21.863 --rc geninfo_unexecuted_blocks=1 00:06:21.863 00:06:21.863 ' 00:06:21.863 17:06:03 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:21.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.863 --rc genhtml_branch_coverage=1 00:06:21.863 --rc genhtml_function_coverage=1 00:06:21.863 --rc genhtml_legend=1 00:06:21.863 --rc geninfo_all_blocks=1 00:06:21.863 --rc geninfo_unexecuted_blocks=1 00:06:21.863 00:06:21.863 ' 00:06:21.863 17:06:03 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.863 17:06:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:21.863 17:06:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.863 17:06:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.863 ************************************ 00:06:21.863 START TEST thread_poller_perf 00:06:21.863 ************************************ 00:06:21.863 17:06:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:21.863 [2024-12-06 17:06:04.024643] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:21.863 [2024-12-06 17:06:04.024761] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61454 ] 00:06:22.121 [2024-12-06 17:06:04.176800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.121 Running 1000 pollers for 1 seconds with 1 microseconds period. 
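The banner above restates the poller_perf invocation in words: judging from the printed text, -b is the number of pollers, -t the run time in seconds, and -l the poller period in microseconds. The suite runs the binary twice (the zero-period variant follows below); flag meanings here are inferred from the banners rather than from the tool's help output:

PERF=test/thread/poller_perf/poller_perf   # path as printed in the trace
$PERF -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s: timer-driven pollers
$PERF -b 1000 -l 0 -t 1   # 1000 pollers, 0 us period, 1 s: busy-looping pollers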
00:06:22.121 [2024-12-06 17:06:04.220427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.055 [2024-12-06T17:06:05.353Z] ====================================== 00:06:23.055 [2024-12-06T17:06:05.353Z] busy:2211695952 (cyc) 00:06:23.055 [2024-12-06T17:06:05.353Z] total_run_count: 287000 00:06:23.055 [2024-12-06T17:06:05.353Z] tsc_hz: 2200000000 (cyc) 00:06:23.055 [2024-12-06T17:06:05.353Z] ====================================== 00:06:23.055 [2024-12-06T17:06:05.353Z] poller_cost: 7706 (cyc), 3502 (nsec) 00:06:23.055 00:06:23.055 real 0m1.266s 00:06:23.055 user 0m1.114s 00:06:23.055 sys 0m0.045s 00:06:23.055 17:06:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.055 ************************************ 00:06:23.055 END TEST thread_poller_perf 00:06:23.055 ************************************ 00:06:23.055 17:06:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:23.055 17:06:05 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.055 17:06:05 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:23.055 17:06:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.055 17:06:05 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.055 ************************************ 00:06:23.055 START TEST thread_poller_perf 00:06:23.055 ************************************ 00:06:23.055 17:06:05 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:23.055 [2024-12-06 17:06:05.344149] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:23.055 [2024-12-06 17:06:05.344238] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61484 ] 00:06:23.312 [2024-12-06 17:06:05.496165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.312 Running 1000 pollers for 1 seconds with 0 microseconds period. 
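The poller_cost line is plain arithmetic over the counters printed with it: busy cycles divided by total_run_count, then converted to nanoseconds through the reported TSC frequency. The same computation applies to the zero-period run reported below. A minimal recheck in shell integer arithmetic:

busy_cyc=2211695952                     # busy: (cyc) from the table above
runs=287000                             # total_run_count
tsc_hz=2200000000                       # tsc_hz: 2.2 GHz
cyc=$(( busy_cyc / runs ))              # 7706 (cyc) per poll
nsec=$(( cyc * 1000000000 / tsc_hz ))   # 3502 (nsec) per poll
echo "poller_cost: $cyc (cyc), $nsec (nsec)"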
00:06:23.312 [2024-12-06 17:06:05.535903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.691 [2024-12-06T17:06:06.989Z] ====================================== 00:06:24.691 [2024-12-06T17:06:06.989Z] busy:2202254234 (cyc) 00:06:24.691 [2024-12-06T17:06:06.989Z] total_run_count: 3375000 00:06:24.691 [2024-12-06T17:06:06.989Z] tsc_hz: 2200000000 (cyc) 00:06:24.691 [2024-12-06T17:06:06.989Z] ====================================== 00:06:24.691 [2024-12-06T17:06:06.989Z] poller_cost: 652 (cyc), 296 (nsec) 00:06:24.691 00:06:24.691 real 0m1.253s 00:06:24.691 user 0m1.106s 00:06:24.691 sys 0m0.040s 00:06:24.691 17:06:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.691 ************************************ 00:06:24.691 END TEST thread_poller_perf 00:06:24.691 ************************************ 00:06:24.691 17:06:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.691 17:06:06 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:24.691 ************************************ 00:06:24.691 END TEST thread 00:06:24.691 ************************************ 00:06:24.691 00:06:24.691 real 0m2.807s 00:06:24.691 user 0m2.376s 00:06:24.691 sys 0m0.211s 00:06:24.691 17:06:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.691 17:06:06 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.691 17:06:06 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:24.691 17:06:06 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.691 17:06:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.691 17:06:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.691 17:06:06 -- common/autotest_common.sh@10 -- # set +x 00:06:24.691 ************************************ 00:06:24.691 START TEST app_cmdline 00:06:24.691 ************************************ 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:24.691 * Looking for test storage... 
00:06:24.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.691 17:06:06 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.691 --rc genhtml_branch_coverage=1 00:06:24.691 --rc genhtml_function_coverage=1 00:06:24.691 --rc genhtml_legend=1 00:06:24.691 --rc geninfo_all_blocks=1 00:06:24.691 --rc geninfo_unexecuted_blocks=1 00:06:24.691 00:06:24.691 ' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.691 --rc genhtml_branch_coverage=1 00:06:24.691 --rc genhtml_function_coverage=1 00:06:24.691 --rc genhtml_legend=1 00:06:24.691 --rc geninfo_all_blocks=1 00:06:24.691 --rc geninfo_unexecuted_blocks=1 00:06:24.691 
00:06:24.691 ' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.691 --rc genhtml_branch_coverage=1 00:06:24.691 --rc genhtml_function_coverage=1 00:06:24.691 --rc genhtml_legend=1 00:06:24.691 --rc geninfo_all_blocks=1 00:06:24.691 --rc geninfo_unexecuted_blocks=1 00:06:24.691 00:06:24.691 ' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:24.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.691 --rc genhtml_branch_coverage=1 00:06:24.691 --rc genhtml_function_coverage=1 00:06:24.691 --rc genhtml_legend=1 00:06:24.691 --rc geninfo_all_blocks=1 00:06:24.691 --rc geninfo_unexecuted_blocks=1 00:06:24.691 00:06:24.691 ' 00:06:24.691 17:06:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:24.691 17:06:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61567 00:06:24.691 17:06:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61567 00:06:24.691 17:06:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61567 ']' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.691 17:06:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.691 [2024-12-06 17:06:06.942605] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:06:24.691 [2024-12-06 17:06:06.942728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61567 ] 00:06:24.951 [2024-12-06 17:06:07.094415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.951 [2024-12-06 17:06:07.136708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.209 17:06:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.209 17:06:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:25.209 17:06:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:25.469 { 00:06:25.469 "fields": { 00:06:25.469 "commit": "c7acbd6be", 00:06:25.469 "major": 25, 00:06:25.469 "minor": 1, 00:06:25.469 "patch": 0, 00:06:25.469 "suffix": "-pre" 00:06:25.469 }, 00:06:25.469 "version": "SPDK v25.01-pre git sha1 c7acbd6be" 00:06:25.469 } 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:25.469 17:06:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:25.469 17:06:07 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:25.729 2024/12/06 17:06:07 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:25.729 request: 00:06:25.729 { 00:06:25.729 "method": "env_dpdk_get_mem_stats", 00:06:25.729 "params": {} 00:06:25.729 } 00:06:25.729 Got JSON-RPC error response 00:06:25.729 GoRPCClient: error on JSON-RPC call 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.729 17:06:08 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61567 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61567 ']' 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61567 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:25.729 17:06:08 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:25.990 17:06:08 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61567 00:06:25.990 17:06:08 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:25.990 killing process with pid 61567 00:06:25.990 17:06:08 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:25.990 17:06:08 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61567' 00:06:25.990 17:06:08 app_cmdline -- common/autotest_common.sh@973 -- # kill 61567 00:06:25.990 17:06:08 app_cmdline -- common/autotest_common.sh@978 -- # wait 61567 00:06:26.249 00:06:26.249 real 0m1.613s 00:06:26.249 user 0m2.162s 00:06:26.249 sys 0m0.403s 00:06:26.249 17:06:08 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.249 ************************************ 00:06:26.249 END TEST app_cmdline 00:06:26.249 ************************************ 00:06:26.249 17:06:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.249 17:06:08 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:26.249 17:06:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.249 17:06:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.249 17:06:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.249 ************************************ 00:06:26.249 START TEST version 00:06:26.249 ************************************ 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:26.249 * Looking for test storage... 
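The app_cmdline run above exercises spdk_tgt's --rpcs-allowed filter: the two allow-listed methods answer normally, while env_dpdk_get_mem_stats, a method the target otherwise supports, is rejected with JSON-RPC error -32601, which is exactly what the NOT wrapper asserts. The same behaviour in isolation, using the binaries and scripts from this workspace:

build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
scripts/rpc.py rpc_get_methods          # allowed: exactly the two listed methods
scripts/rpc.py spdk_get_version         # allowed: the version object shown above
scripts/rpc.py env_dpdk_get_mem_stats   # filtered: Code=-32601 Msg='Method not found'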
00:06:26.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.249 17:06:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.249 17:06:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.249 17:06:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.249 17:06:08 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.249 17:06:08 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.249 17:06:08 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.249 17:06:08 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.249 17:06:08 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.249 17:06:08 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.249 17:06:08 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.249 17:06:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.249 17:06:08 version -- scripts/common.sh@344 -- # case "$op" in 00:06:26.249 17:06:08 version -- scripts/common.sh@345 -- # : 1 00:06:26.249 17:06:08 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.249 17:06:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.249 17:06:08 version -- scripts/common.sh@365 -- # decimal 1 00:06:26.249 17:06:08 version -- scripts/common.sh@353 -- # local d=1 00:06:26.249 17:06:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.249 17:06:08 version -- scripts/common.sh@355 -- # echo 1 00:06:26.249 17:06:08 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.249 17:06:08 version -- scripts/common.sh@366 -- # decimal 2 00:06:26.249 17:06:08 version -- scripts/common.sh@353 -- # local d=2 00:06:26.249 17:06:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.249 17:06:08 version -- scripts/common.sh@355 -- # echo 2 00:06:26.249 17:06:08 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.249 17:06:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.249 17:06:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.249 17:06:08 version -- scripts/common.sh@368 -- # return 0 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.249 --rc genhtml_branch_coverage=1 00:06:26.249 --rc genhtml_function_coverage=1 00:06:26.249 --rc genhtml_legend=1 00:06:26.249 --rc geninfo_all_blocks=1 00:06:26.249 --rc geninfo_unexecuted_blocks=1 00:06:26.249 00:06:26.249 ' 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.249 --rc genhtml_branch_coverage=1 00:06:26.249 --rc genhtml_function_coverage=1 00:06:26.249 --rc genhtml_legend=1 00:06:26.249 --rc geninfo_all_blocks=1 00:06:26.249 --rc geninfo_unexecuted_blocks=1 00:06:26.249 00:06:26.249 ' 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.249 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:26.249 --rc genhtml_branch_coverage=1 00:06:26.249 --rc genhtml_function_coverage=1 00:06:26.249 --rc genhtml_legend=1 00:06:26.249 --rc geninfo_all_blocks=1 00:06:26.249 --rc geninfo_unexecuted_blocks=1 00:06:26.249 00:06:26.249 ' 00:06:26.249 17:06:08 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.249 --rc genhtml_branch_coverage=1 00:06:26.249 --rc genhtml_function_coverage=1 00:06:26.249 --rc genhtml_legend=1 00:06:26.249 --rc geninfo_all_blocks=1 00:06:26.249 --rc geninfo_unexecuted_blocks=1 00:06:26.249 00:06:26.249 ' 00:06:26.249 17:06:08 version -- app/version.sh@17 -- # get_header_version major 00:06:26.249 17:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.249 17:06:08 version -- app/version.sh@14 -- # cut -f2 00:06:26.249 17:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.249 17:06:08 version -- app/version.sh@17 -- # major=25 00:06:26.249 17:06:08 version -- app/version.sh@18 -- # get_header_version minor 00:06:26.249 17:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.249 17:06:08 version -- app/version.sh@14 -- # cut -f2 00:06:26.249 17:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.249 17:06:08 version -- app/version.sh@18 -- # minor=1 00:06:26.249 17:06:08 version -- app/version.sh@19 -- # get_header_version patch 00:06:26.249 17:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.249 17:06:08 version -- app/version.sh@14 -- # cut -f2 00:06:26.249 17:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.249 17:06:08 version -- app/version.sh@19 -- # patch=0 00:06:26.250 17:06:08 version -- app/version.sh@20 -- # get_header_version suffix 00:06:26.250 17:06:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:26.250 17:06:08 version -- app/version.sh@14 -- # cut -f2 00:06:26.250 17:06:08 version -- app/version.sh@14 -- # tr -d '"' 00:06:26.509 17:06:08 version -- app/version.sh@20 -- # suffix=-pre 00:06:26.509 17:06:08 version -- app/version.sh@22 -- # version=25.1 00:06:26.509 17:06:08 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:26.509 17:06:08 version -- app/version.sh@28 -- # version=25.1rc0 00:06:26.509 17:06:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:26.509 17:06:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:26.509 17:06:08 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:26.509 17:06:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:26.509 00:06:26.509 real 0m0.250s 00:06:26.509 user 0m0.165s 00:06:26.509 sys 0m0.119s 00:06:26.509 17:06:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.509 17:06:08 version -- common/autotest_common.sh@10 -- # set +x 00:06:26.509 ************************************ 00:06:26.509 END TEST version 00:06:26.509 ************************************ 00:06:26.509 17:06:08 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:26.509 17:06:08 -- spdk/autotest.sh@194 -- # uname -s 00:06:26.509 17:06:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:26.509 17:06:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:26.509 17:06:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:26.509 17:06:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:26.509 17:06:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.509 17:06:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.509 17:06:08 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:26.509 17:06:08 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:26.509 17:06:08 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:26.509 17:06:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.509 17:06:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.509 17:06:08 -- common/autotest_common.sh@10 -- # set +x 00:06:26.509 ************************************ 00:06:26.509 START TEST nvmf_tcp 00:06:26.509 ************************************ 00:06:26.509 17:06:08 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:26.509 * Looking for test storage... 00:06:26.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:26.509 17:06:08 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.509 17:06:08 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.509 17:06:08 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.769 17:06:08 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.769 17:06:08 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.770 17:06:08 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:26.770 17:06:08 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:26.770 17:06:08 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.770 17:06:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.770 ************************************ 00:06:26.770 START TEST nvmf_target_core 00:06:26.770 ************************************ 00:06:26.770 17:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:26.770 * Looking for test storage... 00:06:26.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:26.770 17:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.770 17:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.770 17:06:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.770 --rc genhtml_branch_coverage=1 00:06:26.770 --rc genhtml_function_coverage=1 00:06:26.770 --rc genhtml_legend=1 00:06:26.770 --rc geninfo_all_blocks=1 00:06:26.770 --rc geninfo_unexecuted_blocks=1 00:06:26.770 00:06:26.770 ' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.770 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.031 17:06:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.032 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:27.032 ************************************ 00:06:27.032 START TEST nvmf_abort 00:06:27.032 ************************************ 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:27.032 * Looking for test storage... 
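The '[: : integer expression expected' complaint above comes from nvmf/common.sh line 33 running a numeric test whose left operand expands to an empty string; the test exits with status 2, is treated as false, and the script carries on. A minimal reproduction with the usual guard (the variable name is illustrative, not the one common.sh uses):

flag=''
[ "$flag" -eq 1 ]        # [: : integer expression expected; status 2, acts as false
[ "${flag:-0}" -eq 1 ]   # defaulting the operand keeps the numeric test well-formed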
00:06:27.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.032 --rc genhtml_branch_coverage=1 00:06:27.032 --rc genhtml_function_coverage=1 00:06:27.032 --rc genhtml_legend=1 00:06:27.032 --rc geninfo_all_blocks=1 00:06:27.032 --rc geninfo_unexecuted_blocks=1 00:06:27.032 00:06:27.032 ' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.032 --rc genhtml_branch_coverage=1 00:06:27.032 --rc genhtml_function_coverage=1 00:06:27.032 --rc genhtml_legend=1 00:06:27.032 --rc geninfo_all_blocks=1 00:06:27.032 --rc geninfo_unexecuted_blocks=1 00:06:27.032 00:06:27.032 ' 00:06:27.032 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.032 --rc genhtml_branch_coverage=1 00:06:27.032 --rc genhtml_function_coverage=1 00:06:27.032 --rc genhtml_legend=1 00:06:27.032 --rc geninfo_all_blocks=1 00:06:27.032 --rc geninfo_unexecuted_blocks=1 00:06:27.033 00:06:27.033 ' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.033 --rc genhtml_branch_coverage=1 00:06:27.033 --rc genhtml_function_coverage=1 00:06:27.033 --rc genhtml_legend=1 00:06:27.033 --rc geninfo_all_blocks=1 00:06:27.033 --rc geninfo_unexecuted_blocks=1 00:06:27.033 00:06:27.033 ' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.033 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:27.033 
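nvmftestinit with NET_TYPE=virt hands control to nvmf_veth_init, whose trace follows: two initiator veths on 10.0.0.1/24 and 10.0.0.2/24, a target namespace nvmf_tgt_ns_spdk holding 10.0.0.3/24 and 10.0.0.4/24, and the nvmf_br bridge tying the peer ends together. A condensed sketch of one initiator/target pair, reassembled from the commands traced below:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up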
17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:27.033 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:06:27.034 Cannot find device "nvmf_init_br" 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:27.034 Cannot find device "nvmf_init_br2" 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:27.034 Cannot find device "nvmf_tgt_br" 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:27.034 Cannot find device "nvmf_tgt_br2" 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:06:27.034 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:27.293 Cannot find device "nvmf_init_br" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:27.293 Cannot find device "nvmf_init_br2" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:27.293 Cannot find device "nvmf_tgt_br" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:27.293 Cannot find device "nvmf_tgt_br2" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:27.293 Cannot find device "nvmf_br" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:27.293 Cannot find device "nvmf_init_if" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:27.293 Cannot find device "nvmf_init_if2" 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:27.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:27.293 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:27.293 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:27.554 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:27.554 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:06:27.554 00:06:27.554 --- 10.0.0.3 ping statistics --- 00:06:27.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.554 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:27.554 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:27.554 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:06:27.554 00:06:27.554 --- 10.0.0.4 ping statistics --- 00:06:27.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.554 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:27.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:27.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:06:27.554 00:06:27.554 --- 10.0.0.1 ping statistics --- 00:06:27.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.554 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:27.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:27.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:06:27.554 00:06:27.554 --- 10.0.0.2 ping statistics --- 00:06:27.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.554 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=61997 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 61997 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 61997 ']' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.554 17:06:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:27.554 [2024-12-06 17:06:09.807734] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
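At this point nvmftestinit has finished building the virtual topology that the four pings just verified: with NET_TYPE=virt, the initiator-side veths nvmf_init_if (10.0.0.1) and nvmf_init_if2 (10.0.0.2) stay in the root namespace, their target-side counterparts nvmf_tgt_if (10.0.0.3) and nvmf_tgt_if2 (10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and every veth peer is enslaved to the nvmf_br bridge; the iptables rules are tagged with an SPDK_NVMF comment so teardown can later sweep exactly those rules out. The nvmf_tgt application is then launched inside that namespace via ip netns exec with core mask 0xE. A condensed sketch of one interface pair, assembled from the commands logged above (a sketch of the shape of nvmf_veth_init, not its exact code):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                  # joins the veth peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # comment-tagged so the fini path can find and delete this rule
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # root namespace -> target namespace via nvmf_br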
00:06:27.554 [2024-12-06 17:06:09.807838] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.813 [2024-12-06 17:06:09.961111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.813 [2024-12-06 17:06:10.001284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:27.813 [2024-12-06 17:06:10.001374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.813 [2024-12-06 17:06:10.001395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.813 [2024-12-06 17:06:10.001415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.813 [2024-12-06 17:06:10.001428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.813 [2024-12-06 17:06:10.002533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.813 [2024-12-06 17:06:10.002621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.813 [2024-12-06 17:06:10.002630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.813 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.813 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:27.813 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:27.813 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.813 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 [2024-12-06 17:06:10.149408] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 Malloc0 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 
Delay0 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 [2024-12-06 17:06:10.215660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.072 17:06:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:28.330 [2024-12-06 17:06:10.399786] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:30.234 Initializing NVMe Controllers 00:06:30.234 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:06:30.234 controller IO queue size 128 less than required 00:06:30.234 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:30.234 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:30.234 Initialization complete. Launching workers. 
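The workload above is deliberately slow on the target side: Delay0 layers the delay bdev over the 64 MiB, 4096-byte-block Malloc0 with average and p99 latencies of 1000000 for both reads and writes (the delay bdev takes these in microseconds, so roughly one second per I/O), which keeps commands outstanding long enough for the abort example to cancel them. The "IO queue size 128 less than required" notice means the requested depth does not fit the controller's 128-entry I/O queue, so surplus requests also wait in the host driver, where aborts can catch them before submission; the summary that follows tallies submitted, successful, and unsuccessful aborts. A sketch of reproducing the target configuration by hand with rpc.py, reusing the arguments from the rpc_cmd lines above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0           # 64 MiB, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # avg/p99 read+write delay
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420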
00:06:30.234 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27017 00:06:30.234 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27078, failed to submit 62 00:06:30.234 success 27021, unsuccessful 57, failed 0 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:30.234 rmmod nvme_tcp 00:06:30.234 rmmod nvme_fabrics 00:06:30.234 rmmod nvme_keyring 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 61997 ']' 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 61997 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 61997 ']' 00:06:30.234 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 61997 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61997 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:30.493 killing process with pid 61997 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61997' 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 61997 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 61997 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:30.493 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:06:30.753 ************************************ 00:06:30.753 END TEST nvmf_abort 00:06:30.753 ************************************ 00:06:30.753 00:06:30.753 real 0m3.887s 00:06:30.753 user 0m10.142s 00:06:30.753 sys 0m0.976s 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.753 17:06:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.753 17:06:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:30.753 17:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.753 17:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.753 17:06:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.753 ************************************ 00:06:30.753 START TEST nvmf_ns_hotplug_stress 00:06:30.753 ************************************ 00:06:30.753 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.014 * Looking for test storage... 00:06:31.014 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.014 --rc genhtml_branch_coverage=1 00:06:31.014 --rc genhtml_function_coverage=1 00:06:31.014 --rc genhtml_legend=1 00:06:31.014 --rc geninfo_all_blocks=1 00:06:31.014 --rc geninfo_unexecuted_blocks=1 00:06:31.014 00:06:31.014 ' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.014 --rc genhtml_branch_coverage=1 00:06:31.014 --rc genhtml_function_coverage=1 00:06:31.014 --rc genhtml_legend=1 00:06:31.014 --rc geninfo_all_blocks=1 00:06:31.014 --rc geninfo_unexecuted_blocks=1 00:06:31.014 00:06:31.014 ' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.014 --rc genhtml_branch_coverage=1 00:06:31.014 --rc genhtml_function_coverage=1 00:06:31.014 --rc genhtml_legend=1 00:06:31.014 --rc geninfo_all_blocks=1 00:06:31.014 --rc geninfo_unexecuted_blocks=1 00:06:31.014 00:06:31.014 ' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.014 --rc genhtml_branch_coverage=1 00:06:31.014 --rc genhtml_function_coverage=1 00:06:31.014 --rc genhtml_legend=1 00:06:31.014 --rc geninfo_all_blocks=1 00:06:31.014 --rc geninfo_unexecuted_blocks=1 00:06:31.014 00:06:31.014 ' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:31.014 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.015 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:31.015 17:06:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:31.015 Cannot find device "nvmf_init_br" 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:31.015 Cannot find device "nvmf_init_br2" 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:31.015 Cannot find device "nvmf_tgt_br" 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:31.015 Cannot find device "nvmf_tgt_br2" 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:31.015 Cannot find device "nvmf_init_br" 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:06:31.015 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:31.274 Cannot find device "nvmf_init_br2" 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:31.274 Cannot find device "nvmf_tgt_br" 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:31.274 Cannot find device "nvmf_tgt_br2" 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:31.274 Cannot find device "nvmf_br" 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:31.274 Cannot find device "nvmf_init_if" 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:31.274 Cannot find device "nvmf_init_if2" 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:06:31.274 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:31.274 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:31.275 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:31.275 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:31.534 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:31.534 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:06:31.534 00:06:31.534 --- 10.0.0.3 ping statistics --- 00:06:31.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.534 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:31.534 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:06:31.534 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:06:31.534 00:06:31.534 --- 10.0.0.4 ping statistics --- 00:06:31.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.534 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:31.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:31.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:06:31.534 00:06:31.534 --- 10.0.0.1 ping statistics --- 00:06:31.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.534 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:31.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:31.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:06:31.534 00:06:31.534 --- 10.0.0.2 ping statistics --- 00:06:31.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:31.534 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=62275 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 62275 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 62275 ']' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.534 17:06:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.534 17:06:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:31.534 [2024-12-06 17:06:13.725784] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:06:31.534 [2024-12-06 17:06:13.725882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.793 [2024-12-06 17:06:13.879804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.793 [2024-12-06 17:06:13.918695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:31.793 [2024-12-06 17:06:13.919199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:31.793 [2024-12-06 17:06:13.919476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:31.793 [2024-12-06 17:06:13.919580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:31.793 [2024-12-06 17:06:13.919661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
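The app_setup_trace notices repeat here because the hotplug-stress target is also launched with -e 0xFFFF, enabling every tracepoint group; the log itself names the two ways to inspect the trace while pid 62275 is running. A sketch taken directly from those NOTICE lines:

    # snapshot the running target's tracepoints (app name nvmf, shm id 0)
    spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory ring for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0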
00:06:31.793 [2024-12-06 17:06:13.920741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:31.793 [2024-12-06 17:06:13.920835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:06:31.793 [2024-12-06 17:06:13.920854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:31.793 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:32.052 [2024-12-06 17:06:14.321103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:32.052 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:32.620 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:06:32.961 [2024-12-06 17:06:14.921854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:06:32.961 17:06:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:06:32.961 17:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:33.220 Malloc0
00:06:33.476 17:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:33.733 Delay0
00:06:33.733 17:06:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:33.991 17:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:34.249 NULL1
00:06:34.249 17:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:34.507 17:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=62398
00:06:34.507 17:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:34.507 17:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:34.507 17:06:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:35.883 Read completed with error (sct=0, sc=11)
00:06:35.883 17:06:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:35.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:35.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:35.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:35.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:35.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:36.141 17:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:06:36.141 17:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:06:36.399 true
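Every iteration that follows repeats the same @44-@50 cycle while the perf process stays alive. Condensed from the xtrace, the loop is roughly the following sketch ($rpc stands for the rpc.py path used throughout the trace; the real script at lines 44-50 of ns_hotplug_stress.sh may guard these steps slightly differently):

    # While the initiator is still running: hot-remove namespace 1,
    # re-attach the Delay0 bdev, and grow NULL1 by one block.
    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc bdev_null_resize NULL1 $((++null_size))
    done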
00:06:36.399 17:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:36.399 17:06:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.333 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:37.333 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:06:37.333 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:06:37.591 true
00:06:37.591 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:37.591 17:06:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:37.849 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:38.107 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:06:38.107 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:06:38.365 true
00:06:38.623 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:38.623 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:38.887 17:06:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:39.148 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:06:39.148 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:06:39.406 true
00:06:39.406 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:39.406 17:06:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:40.339 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:40.339 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:06:40.339 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:06:40.597 true
00:06:40.597 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:40.597 17:06:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:40.855 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:41.114 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:06:41.114 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:06:41.375 true
00:06:41.633 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:41.633 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:41.891 17:06:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:42.150 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:06:42.150 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:06:42.409 true
00:06:42.409 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:42.409 17:06:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:43.344 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:43.344 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:06:43.344 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:06:43.909 true
00:06:43.909 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:43.909 17:06:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:44.167 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:44.425 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:06:44.425 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:06:44.682 true
00:06:44.682 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:44.682 17:06:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:44.940 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:45.197 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:06:45.197 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:06:45.763 true
00:06:45.763 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:45.763 17:06:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.022 17:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:46.281 17:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:06:46.281 17:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:06:46.540 true
00:06:46.540 17:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:46.540 17:06:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:46.798 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:47.366 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:06:47.366 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:06:47.624 true
00:06:47.624 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:47.624 17:06:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.882 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:48.204 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:06:48.204 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:06:48.462 true
00:06:48.462 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:48.462 17:06:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.397 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.397 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:06:49.397 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:06:49.964 true
00:06:49.964 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:49.964 17:06:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.223 17:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:50.482 17:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:50.482 17:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:50.740 true
00:06:50.740 17:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:50.740 17:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.999 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:51.257 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:51.257 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:51.516 true
00:06:51.516 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:51.516 17:06:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.451 17:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:52.451 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:52.451 17:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:52.451 17:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:52.710 true
00:06:52.710 17:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:52.710 17:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.968 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:53.226 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:53.226 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:53.483 true
00:06:53.741 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:53.741 17:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.998 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:54.255 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:54.255 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:54.512 true
00:06:54.512 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:54.512 17:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.446 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:55.446 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:55.446 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:55.705 true
00:06:55.705 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:55.705 17:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.962 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:56.528 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:56.528 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:06:56.528 true
00:06:56.786 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:56.786 17:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:57.044 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:57.303 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:57.303 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:06:57.561 true
00:06:57.561 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:57.561 17:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:57.820 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:58.078 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:58.078 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:58.336 true
00:06:58.336 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:58.336 17:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:59.287 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:59.547 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:59.547 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:59.807 true
00:06:59.807 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:06:59.807 17:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:00.065 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:00.324 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:07:00.324 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:07:00.582 true
00:07:00.582 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:07:00.582 17:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:00.841 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:01.100 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:07:01.100 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:01.359 true
00:07:01.359 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:07:01.359 17:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:02.294 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:02.552 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:02.552 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:02.810 true
00:07:02.810 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:07:02.810 17:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
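The 30-second spdk_nvme_perf run launched at line 40 winds down at this point and prints its summary below. For readability, the same invocation is repeated here with its flags annotated; reading -Q 1000 as "print only every 1000th I/O error" is an inference from the "Message suppressed 999 times" lines in this log, not a claim about the tool's documentation.

    # Initiator command from the trace, flags annotated (sketch).
    perf_args=(
        -c 0x1                                                    # core mask: core 0 only
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'  # TCP target from the trace
        -t 30                                                     # run for 30 seconds
        -q 128                                                    # queue depth 128
        -w randread                                               # random-read workload
        -o 512                                                    # 512-byte I/Os
        -Q 1000                                                   # rate-limit error prints (inferred)
    )
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}" &
    PERF_PID=$!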
00:07:03.067 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.324 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:03.324 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:03.582 true
00:07:03.582 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:07:03.582 17:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:03.840 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.099 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:04.099 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:04.358 true
00:07:04.358 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:07:04.358 17:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:05.295 Initializing NVMe Controllers
00:07:05.295 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:07:05.295 Controller IO queue size 128, less than required.
00:07:05.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:05.295 Controller IO queue size 128, less than required.
00:07:05.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:05.295 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:05.295 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:05.295 Initialization complete. Launching workers.
00:07:05.295 ========================================================
00:07:05.295                                                                          Latency(us)
00:07:05.295 Device Information                                                    :    IOPS    MiB/s   Average       min        max
00:07:05.295 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  352.93     0.17 132041.34   3397.15 1065215.47
00:07:05.295 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7009.37     3.42  18260.64   3485.58  570210.55
00:07:05.295 ========================================================
00:07:05.295 Total                                                                 : 7362.30     3.59  23715.05   3397.15 1065215.47
00:07:05.295
00:07:05.295 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:05.554 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:07:05.554 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:07:05.847 true
00:07:05.847 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 62398
00:07:05.847 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (62398) - No such process
00:07:05.847 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 62398
00:07:05.847 17:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:06.109 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:06.368 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:06.368 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:06.368 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:06.368 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.368 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:06.627 null0
00:07:06.627 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.627 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.627 17:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:06.886 null1
00:07:06.886 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.886 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.886 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:07.144 null2
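The @58-@60 trace around this point builds the backing devices for the multi-threaded phase: eight null bdevs named null0 through null7, each 100 MiB with a 4096-byte block size (bdev_null_create takes name, size in MiB, block size). Condensed, with $rpc standing for the rpc.py path used throughout the trace:

    # Create one null bdev per worker (null0..null7, 100 MiB, 4 KiB blocks).
    for ((i = 0; i < 8; i++)); do
        $rpc bdev_null_create "null$i" 100 4096
    done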
00:07:07.144 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.144 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.144 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:07.403 null3
00:07:07.403 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.403 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.403 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:07.662 null4
00:07:07.662 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.662 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.662 17:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:07.920 null5
00:07:08.178 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:08.178 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:08.178 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:08.435 null6
00:07:08.435 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:08.435 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:08.435 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:08.693 null7
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.693 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
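From the @14-@18 lines interleaved above, the worker function and its launcher reconstruct to roughly the following (argument names follow the "local nsid=... bdev=..." trace lines; the ten iterations come from the "(( i < 10 ))" checks, and the pids collected at @64 are reaped by the wait at @66):

    # One worker: ten add/remove cycles of a single namespace.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    # Launcher: one background worker per null bdev, then reap them all.
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"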
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 63468 63469 63471 63473 63474 63476 63479 63482
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.694 17:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.952 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.212 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:09.470 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.729 17:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:09.729 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.729 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.729 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:09.988 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.247 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:10.247 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:10.247 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:10.247 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.247 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.506 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:10.766 17:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:10.766 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:10.766 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:10.766 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:10.766 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.025 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.284 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.542 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.798 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.798 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.798 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.798 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.798 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.798 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
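Every rpc.py call in this trace is a thin client sending one JSON-RPC 2.0 request to the SPDK target over a Unix-domain socket. As a hedged illustration only (the params layout is inferred from the CLI arguments, /var/tmp/spdk.sock is SPDK's default socket path rather than anything this log states, and nc must have Unix-socket support):

    # Approximately what
    #   rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
    # puts on the wire:
    printf '%s\n' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns",
                    "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                               "namespace": {"nsid": 3, "bdev_name": "null2"}}}' \
        | nc -U /var/tmp/spdk.sock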
00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 17:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.799 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.799 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.799 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:07:12.057 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.314 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.572 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.828 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.828 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.828 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.828 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.828 17:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.828 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.828 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
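The null0 through null7 bdevs backing namespaces 1 through 8 are created before this excerpt begins. A hypothetical setup step consistent with the names in the trace (the sizes and block size below are illustrative, not taken from this log) would be:

    # Eight null bdevs, 100 MB each with 4096-byte blocks (illustrative values)
    for n in $(seq 0 7); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create "null$n" 100 4096
    done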
00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.085 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.342 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.599 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.856 17:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.856 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.856 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.856 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.856 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.856 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.113 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
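Once the counter reaches 10 the loop falls through, and the next lines show ns_hotplug_stress.sh line 68 disarming the signal trap before line 70 runs nvmftestfini. That is the standard SPDK test idiom: arm cleanup on entry so a failure still tears the target down, then disarm it on the success path. A sketch of the idiom (the armed form is assumed; only the disarm and the nvmftestfini call appear in this trace):

    trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT   # armed near test start (assumed)
    # ... stress loop runs ...
    trap - SIGINT SIGTERM EXIT                        # line 68: disarm on success
    nvmftestfini                                      # line 70: orderly teardown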
00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.370 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.628 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.884 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.884 17:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.884 17:06:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.884 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.152 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:15.423 rmmod nvme_tcp 00:07:15.423 rmmod nvme_fabrics 00:07:15.423 rmmod nvme_keyring 00:07:15.423 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 62275 ']' 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 62275 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 62275 ']' 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 62275 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62275 00:07:15.681 killing process with pid 62275 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62275' 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 62275 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 62275 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set 
nvmf_init_br nomaster 00:07:15.681 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:15.682 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:15.682 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:15.682 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:15.939 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:15.939 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:15.939 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:15.939 17:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:07:15.939 00:07:15.939 real 0m45.120s 00:07:15.939 user 3m44.698s 00:07:15.939 sys 0m12.388s 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.939 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:15.939 ************************************ 00:07:15.939 END TEST nvmf_ns_hotplug_stress 00:07:15.939 ************************************ 00:07:15.940 17:06:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:15.940 17:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.940 17:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.940 17:06:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.940 ************************************ 00:07:15.940 START TEST nvmf_delete_subsystem 00:07:15.940 ************************************ 00:07:15.940 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh 
--transport=tcp 00:07:16.198 * Looking for test storage... 00:07:16.198 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.198 --rc genhtml_branch_coverage=1 00:07:16.198 --rc genhtml_function_coverage=1 00:07:16.198 --rc genhtml_legend=1 00:07:16.198 --rc geninfo_all_blocks=1 00:07:16.198 --rc geninfo_unexecuted_blocks=1 00:07:16.198 00:07:16.198 ' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.198 --rc genhtml_branch_coverage=1 00:07:16.198 --rc genhtml_function_coverage=1 00:07:16.198 --rc genhtml_legend=1 00:07:16.198 --rc geninfo_all_blocks=1 00:07:16.198 --rc geninfo_unexecuted_blocks=1 00:07:16.198 00:07:16.198 ' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.198 --rc genhtml_branch_coverage=1 00:07:16.198 --rc genhtml_function_coverage=1 00:07:16.198 --rc genhtml_legend=1 00:07:16.198 --rc geninfo_all_blocks=1 00:07:16.198 --rc geninfo_unexecuted_blocks=1 00:07:16.198 00:07:16.198 ' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:16.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.198 --rc genhtml_branch_coverage=1 00:07:16.198 --rc genhtml_function_coverage=1 00:07:16.198 --rc genhtml_legend=1 00:07:16.198 --rc geninfo_all_blocks=1 00:07:16.198 --rc geninfo_unexecuted_blocks=1 00:07:16.198 00:07:16.198 ' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:16.198 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.199 
17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:16.199 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
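Two things are worth noting at this point. First, the "integer expression expected" message above is a harness quirk, not a test failure: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against an unset variable and simply falls through. Second, the NVMF_* variables assigned here and continuing just below describe the virtual topology nvmf_veth_init is about to build (the actual ip commands are traced further down): initiator endpoints 10.0.0.1/10.0.0.2 stay in the root namespace, target endpoints 10.0.0.3/10.0.0.4 live inside the nvmf_tgt_ns_spdk namespace, and the *_br veth peers are enslaved to the nvmf_br bridge. A minimal standalone sketch of that topology, with one pair per side instead of the two pairs the harness creates (assumes iproute2 and root privileges):

    # target-side namespace plus one initiator-side and one target-side veth pair
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the *_br peers so the two namespaces can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3   # initiator-to-target sanity check, as the harness does below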
00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:16.199 Cannot find device "nvmf_init_br" 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:16.199 Cannot find device "nvmf_init_br2" 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:16.199 Cannot find device "nvmf_tgt_br" 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:16.199 Cannot find device "nvmf_tgt_br2" 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:16.199 Cannot find device "nvmf_init_br" 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:16.199 Cannot find device "nvmf_init_br2" 00:07:16.199 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:07:16.200 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:16.200 Cannot find device "nvmf_tgt_br" 00:07:16.200 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:07:16.200 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:16.200 Cannot find device "nvmf_tgt_br2" 00:07:16.200 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:07:16.200 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:16.457 Cannot find device "nvmf_br" 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:16.457 Cannot find device "nvmf_init_if" 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:16.457 Cannot find device "nvmf_init_if2" 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:16.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
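The "Cannot find device" and "Cannot open network namespace" messages in this stretch are expected on a clean host: before building anything, the init path first tears down whatever an earlier run may have left behind, and each failing teardown command is followed by a bare true (visible just below each error) so a missing device cannot abort the run. A sketch of that idempotent-cleanup idiom, with device names taken from the trace and error suppression as an assumption of the sketch:

    # remove leftovers from a previous run; ignore 'does not exist' failures
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if  2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    # firewall rules get tagged when added (the ipts wrapper below appends
    # -m comment --comment 'SPDK_NVMF:...'), so cleanup can drop them in one pass:
    iptables-save | grep -v SPDK_NVMF | iptables-restore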
00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:16.457 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:16.457 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:16.458 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:16.716 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:16.716 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:16.716 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.096 ms 00:07:16.716 00:07:16.716 --- 10.0.0.3 ping statistics --- 00:07:16.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.716 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:16.717 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:16.717 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:07:16.717 00:07:16.717 --- 10.0.0.4 ping statistics --- 00:07:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.717 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:16.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:16.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:07:16.717 00:07:16.717 --- 10.0.0.1 ping statistics --- 00:07:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.717 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:16.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:16.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:07:16.717 00:07:16.717 --- 10.0.0.2 ping statistics --- 00:07:16.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.717 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64881 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64881 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64881 ']' 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.717 17:06:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.717 [2024-12-06 17:06:58.860621] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
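With the network verified by the four pings, nvmfappstart launches nvmf_tgt inside the namespace (note the ip netns exec wrapper around the binary above) and waitforlisten blocks until the app answers on /var/tmp/spdk.sock. A bare-bones stand-in for that launch-and-wait pattern, assuming SPDK's scripts/rpc.py is available; the retry count is illustrative and the real waitforlisten does stricter liveness checking:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the RPC socket until the target is up, bailing out if it died early
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.1
    done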
00:07:16.717 [2024-12-06 17:06:58.860713] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.717 [2024-12-06 17:06:59.008045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.975 [2024-12-06 17:06:59.042607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.975 [2024-12-06 17:06:59.042675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.975 [2024-12-06 17:06:59.042687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.975 [2024-12-06 17:06:59.042695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.975 [2024-12-06 17:06:59.042703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.975 [2024-12-06 17:06:59.043483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.975 [2024-12-06 17:06:59.043497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.975 [2024-12-06 17:06:59.179795] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.975 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
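rpc_cmd is the test framework's wrapper around SPDK's JSON-RPC client, so the configuration chain traced here and just below is roughly equivalent to the following rpc.py invocations, with all arguments copied from the trace (the comments are interpretation, not part of the commands):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10            # allow any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s avg/p99 read and write latency
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second delay bdev is the point of this test: it guarantees a deep backlog of in-flight I/O at the moment the subsystem is deleted.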
00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 [2024-12-06 17:06:59.196401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 NULL1 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 Delay0 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64913 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:16.976 17:06:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:17.235 [2024-12-06 17:06:59.410660] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
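This is the heart of the test case: spdk_nvme_perf runs in the background against the listener, gets two seconds to connect and fill its queues (queue depth 128 against a roughly 1 s latency namespace), and then the subsystem is deleted out from under it. The burst of failed completions that follows is therefore the expected outcome, not a malfunction; the -6 in "starting I/O failed: -6" is ENXIO (errno 6 on Linux), returned once the controller is gone. The shape of the sequence as a sketch, with values from the trace and rpc.py assumed to be on PATH:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                                   # let perf connect and queue I/O
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-run
    # wait for perf to notice the dead controller and exit (~15 s budget)
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && { echo "perf failed to exit" >&2; exit 1; }
        sleep 0.5
    done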
00:07:19.135 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:19.135 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:19.135 17:07:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:19.395 [several hundred "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" submission failures elided; they arrive interleaved with the qpair state errors below as the deleted subsystem aborts every queued and in-flight request]
00:07:19.395 [2024-12-06 17:07:01.447020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1a50 is same with the state(6) to be set
00:07:19.395 [2024-12-06 17:07:01.448146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a4ea0 is same with the state(6) to be set
00:07:19.395 [2024-12-06 17:07:01.449730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6f74000c40 is same with the state(6) to be set
00:07:20.332 [2024-12-06 17:07:02.426730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2096aa0 is same with the state(6) to be set
00:07:20.332 [2024-12-06 17:07:02.448852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a27e0 is same with the state(6) to be set
00:07:20.332 [2024-12-06 17:07:02.449066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a1c30 is same with the state(6) to be set
00:07:20.332 [2024-12-06 17:07:02.449776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6f7400d800 is same with the state(6) to be set
00:07:20.332 [2024-12-06 17:07:02.450440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6f7400d020 is same with the state(6) to be set
00:07:20.332 Initializing NVMe Controllers
00:07:20.332 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:07:20.332 Controller IO queue size 128, less than required.
00:07:20.332 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:20.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:20.332 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:20.332 Initialization complete. Launching workers.
00:07:20.332 ========================================================
00:07:20.332                                                                             Latency(us)
00:07:20.332 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:20.332 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     172.28       0.08  893271.92    1157.34 1012148.78
00:07:20.332 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     166.82       0.08  902220.98    1094.95 1014517.95
00:07:20.332 ========================================================
00:07:20.332 Total                                                                    :     339.09       0.17  897674.39    1094.95 1014517.95
00:07:20.332 
00:07:20.332 [2024-12-06 17:07:02.451282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2096aa0 (9): Bad file descriptor
00:07:20.332 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:20.333 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:20.333 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:20.333 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64913
00:07:20.333 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64913
00:07:20.899 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64913) - No such process
00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64913
00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 64913
00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:07:20.899 17:07:02
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 64913 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 [2024-12-06 17:07:02.975987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=64964 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64964 00:07:20.899 17:07:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:20.899 [2024-12-06 17:07:03.164530] 
subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:21.466 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:21.466 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64964
00:07:21.466 17:07:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[five further identical poll iterations elided, at 00:07:21.724, 00:07:22.291, 00:07:22.872, 00:07:23.437 and 00:07:24.003: the loop rechecks kill -0 64964 every 0.5 s while perf runs its 3-second workload]
00:07:24.003 Initializing NVMe Controllers
00:07:24.003 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:07:24.003 Controller IO queue size 128, less than required.
00:07:24.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:24.003 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:24.003 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:24.003 Initialization complete. Launching workers.
00:07:24.003 ======================================================== 00:07:24.003 Latency(us) 00:07:24.003 Device Information : IOPS MiB/s Average min max 00:07:24.003 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003639.30 1000177.94 1042557.65 00:07:24.003 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005958.95 1000199.20 1042440.45 00:07:24.003 ======================================================== 00:07:24.003 Total : 256.00 0.12 1004799.13 1000177.94 1042557.65 00:07:24.003 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64964 00:07:24.261 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (64964) - No such process 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 64964 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:24.261 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.606 rmmod nvme_tcp 00:07:24.606 rmmod nvme_fabrics 00:07:24.606 rmmod nvme_keyring 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64881 ']' 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64881 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64881 ']' 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64881 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64881 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.606 killing 
process with pid 64881 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64881' 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64881 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64881 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:24.606 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:24.881 17:07:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:07:24.881 00:07:24.881 real 0m8.830s 00:07:24.881 user 0m27.358s 00:07:24.881 sys 0m1.448s 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.881 ************************************ 00:07:24.881 END TEST nvmf_delete_subsystem 00:07:24.881 ************************************ 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.881 ************************************ 00:07:24.881 START TEST nvmf_host_management 00:07:24.881 ************************************ 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:24.881 * Looking for test storage... 00:07:24.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.881 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:25.141 
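[Editor's sketch] The delete_subsystem teardown above polls with kill -0 instead of a bare wait, so it can bound how long it blocks on the I/O generator. A minimal reconstruction of that pattern from the xtrace (delete_subsystem.sh lines 57/60/67); the sleep interval is an assumption, it is not visible in this excerpt:

    # kill -0 sends no signal; it only asks "does this pid still exist?"
    delay=0
    while kill -0 64964 2> /dev/null; do
        (( delay++ > 20 )) && break    # bail out rather than hang forever
        sleep 0.5                      # interval assumed; not shown in the log
    done
    wait 64964                         # reap it; "No such process" here is expected
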
17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.141 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:25.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.141 --rc genhtml_branch_coverage=1 00:07:25.141 --rc genhtml_function_coverage=1 00:07:25.141 --rc genhtml_legend=1 00:07:25.141 --rc geninfo_all_blocks=1 00:07:25.142 --rc geninfo_unexecuted_blocks=1 00:07:25.142 00:07:25.142 ' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.142 --rc genhtml_branch_coverage=1 00:07:25.142 --rc genhtml_function_coverage=1 00:07:25.142 --rc genhtml_legend=1 00:07:25.142 --rc geninfo_all_blocks=1 00:07:25.142 --rc geninfo_unexecuted_blocks=1 00:07:25.142 00:07:25.142 ' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.142 --rc genhtml_branch_coverage=1 00:07:25.142 --rc genhtml_function_coverage=1 00:07:25.142 --rc genhtml_legend=1 00:07:25.142 --rc geninfo_all_blocks=1 00:07:25.142 --rc geninfo_unexecuted_blocks=1 00:07:25.142 00:07:25.142 ' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:25.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.142 --rc genhtml_branch_coverage=1 00:07:25.142 --rc 
genhtml_function_coverage=1 00:07:25.142 --rc genhtml_legend=1 00:07:25.142 --rc geninfo_all_blocks=1 00:07:25.142 --rc geninfo_unexecuted_blocks=1 00:07:25.142 00:07:25.142 ' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same prefixes, go promoted to the front ...]:/var/lib/snapd/snap/bin 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same prefixes, protoc promoted to the front ...]:/var/lib/snapd/snap/bin 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... exported PATH as above ...]:/var/lib/snapd/snap/bin 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:07:25.142 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:25.142 17:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:25.142 Cannot find device "nvmf_init_br" 00:07:25.142 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:25.143 Cannot find device "nvmf_init_br2" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:25.143 Cannot find device "nvmf_tgt_br" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:25.143 Cannot find device "nvmf_tgt_br2" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:25.143 Cannot find device "nvmf_init_br" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:25.143 Cannot find device "nvmf_init_br2" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:25.143 Cannot find device "nvmf_tgt_br" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:25.143 Cannot find device "nvmf_tgt_br2" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:25.143 Cannot find device "nvmf_br" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:07:25.143 17:07:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:25.143 Cannot find device "nvmf_init_if" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:25.143 Cannot find device "nvmf_init_if2" 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:25.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:25.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:07:25.143 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:25.401 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:25.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:25.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:07:25.402 00:07:25.402 --- 10.0.0.3 ping statistics --- 00:07:25.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.402 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:25.402 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:07:25.402 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:07:25.402 00:07:25.402 --- 10.0.0.4 ping statistics --- 00:07:25.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.402 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:25.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:25.402 00:07:25.402 --- 10.0.0.1 ping statistics --- 00:07:25.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.402 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:25.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:25.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:07:25.402 00:07:25.402 --- 10.0.0.2 ping statistics --- 00:07:25.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.402 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.402 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=65248 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 65248 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # 
'[' -z 65248 ']' 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.661 17:07:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.661 [2024-12-06 17:07:07.773942] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:07:25.661 [2024-12-06 17:07:07.774045] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.661 [2024-12-06 17:07:07.929938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.919 [2024-12-06 17:07:07.973364] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.919 [2024-12-06 17:07:07.973429] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.919 [2024-12-06 17:07:07.973443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.919 [2024-12-06 17:07:07.973453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.919 [2024-12-06 17:07:07.973462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
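[Editor's sketch] nvmfappstart above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocks in waitforlisten until /var/tmp/spdk.sock is serving. A simplified stand-in for that launch-and-wait step, with the flags taken from the command in the log; the real waitforlisten also confirms readiness over RPC rather than just stat-ing the socket:

    # Launch the target in the test namespace and remember its pid.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Poll for the UNIX-domain RPC socket, failing fast if the target dies.
    for _ in {1..100}; do
        [ -S /var/tmp/spdk.sock ] && break
        kill -0 "$nvmfpid" 2> /dev/null || exit 1    # target died during startup
        sleep 0.1
    done
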
00:07:25.919 [2024-12-06 17:07:07.974397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.919 [2024-12-06 17:07:07.974498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.919 [2024-12-06 17:07:07.974768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.919 [2024-12-06 17:07:07.974774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.919 [2024-12-06 17:07:08.108356] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:25.919 Malloc0 00:07:25.919 [2024-12-06 17:07:08.172034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:25.919 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.920 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:25.920 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.920 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=65306 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 65306 /var/tmp/bdevperf.sock 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 65306 ']' 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:26.178 { 00:07:26.178 "params": { 00:07:26.178 "name": "Nvme$subsystem", 00:07:26.178 "trtype": "$TEST_TRANSPORT", 00:07:26.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:26.178 "adrfam": "ipv4", 00:07:26.178 "trsvcid": "$NVMF_PORT", 00:07:26.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:26.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:26.178 "hdgst": ${hdgst:-false}, 00:07:26.178 "ddgst": ${ddgst:-false} 00:07:26.178 }, 00:07:26.178 "method": "bdev_nvme_attach_controller" 00:07:26.178 } 00:07:26.178 EOF 00:07:26.178 )") 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:26.178 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:26.178 "params": { 00:07:26.178 "name": "Nvme0", 00:07:26.178 "trtype": "tcp", 00:07:26.178 "traddr": "10.0.0.3", 00:07:26.178 "adrfam": "ipv4", 00:07:26.178 "trsvcid": "4420", 00:07:26.178 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:26.178 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:26.178 "hdgst": false, 00:07:26.178 "ddgst": false 00:07:26.178 }, 00:07:26.178 "method": "bdev_nvme_attach_controller" 00:07:26.178 }' 00:07:26.178 [2024-12-06 17:07:08.284731] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
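[Editor's sketch] The --json /dev/fd/63 argument seen above is bash process substitution: gen_nvmf_target_json writes its config to stdout and bdevperf reads it through a file-descriptor path. A rough standalone equivalent; gen_config is a hypothetical stand-in, and only the attach fragment is visible in this excerpt, so the full wrapper document the harness actually emits is not reproduced:

    gen_config() {
        # stand-in for gen_nvmf_target_json 0; values as the log's printf resolves them
        printf '%s\n' '{
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }'
    }
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_config) -q 64 -o 65536 -w verify -t 10
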
00:07:26.178 [2024-12-06 17:07:08.284826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65306 ] 00:07:26.178 [2024-12-06 17:07:08.426212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.178 [2024-12-06 17:07:08.459153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.437 Running I/O for 10 seconds... 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:26.437 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:26.696 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 
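[Editor's sketch] The waitforio xtrace above and below is this loop: poll bdevperf's iostat until the NVMe bdev reports at least 100 completed reads, proving I/O is actually flowing before the host-management steps run. Reconstructed from the trace (host_management.sh lines 45-64); rpc_cmd is the harness's wrapper around scripts/rpc.py:

    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        # 10 tries, 0.25 s apart: ~2.5 s for I/O to appear before giving up.
        for (( i = 10; i != 0; i-- )); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    # waitforio /var/tmp/bdevperf.sock Nvme0n1   # first poll saw 67, second 451
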
00:07:26.696 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:26.696 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:26.696 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.696 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.696 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:26.956 17:07:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.956 [2024-12-06 17:07:09.056821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ade20 is same with the state(6) to be set 00:07:26.956 [2024-12-06 17:07:09.056879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ade20 is same with the state(6) to be set 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.956 [2024-12-06 17:07:09.062514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.956 [2024-12-06 17:07:09.062570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.062588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.956 [2024-12-06 17:07:09.062600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.062613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.956 
[2024-12-06 17:07:09.062624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.062636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:26.956 [2024-12-06 17:07:09.062647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.062659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657130 is same with the state(6) to be set 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.956 17:07:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:26.956 [2024-12-06 17:07:09.081404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657130 (9): Bad file descriptor 00:07:26.956 [2024-12-06 17:07:09.081539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.081592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.081619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.081645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.081671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.081696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 [2024-12-06 17:07:09.081721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.956 [2024-12-06 17:07:09.081739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.956 
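[Editor's note] What the test exercises here: nvmf_subsystem_remove_host revokes host0's access to cnode0, so the target drops the live connection and every in-flight WRITE completes with ABORTED - SQ DELETION, which is the flood of qpair prints that follows. The RPC pair as issued just above; the scripts/rpc.py invocation is an assumption for a standalone repro, the harness goes through its rpc_cmd wrapper:

    # Revoke the host: the target disconnects it and aborts its queued I/O.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-admit the host; the initiator's reconnect logic can then resume I/O.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
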
[2024-12-06 17:07:09.081759 .. 17:07:09.082891] nvme_qpair.c: 243/474: the same WRITE/completion pair repeats for cid:7 through cid:46 (lba:74624 through lba:79616, len:128, lba advancing by 128 per command): each queued WRITE on sqid:1 completes ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 as the deleted qpair drains 00:07:26.957 [2024-12-06
17:07:09.082906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.082917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.082931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.082942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.082956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.082967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.082980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.082992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.083005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.083016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.083033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.083044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.083058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.083070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.083083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.957 [2024-12-06 17:07:09.083095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.957 [2024-12-06 17:07:09.083108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 
17:07:09.083164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.083360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:26.958 [2024-12-06 17:07:09.083376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:26.958 [2024-12-06 17:07:09.084936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:26.958 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:26.958 00:07:26.958 Latency(us) 00:07:26.958 [2024-12-06T17:07:09.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.958 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:26.958 Job: Nvme0n1 ended in about 0.48 seconds with error 00:07:26.958 Verification LBA range: start 0x0 length 0x400 00:07:26.958 Nvme0n1 : 0.48 1205.26 75.33 133.92 0.00 46240.73 2472.49 41704.73 00:07:26.958 [2024-12-06T17:07:09.256Z] =================================================================================================================== 00:07:26.958 [2024-12-06T17:07:09.256Z] Total : 1205.26 75.33 133.92 0.00 46240.73 2472.49 41704.73 00:07:26.958 [2024-12-06 17:07:09.087570] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:26.958 [2024-12-06 17:07:09.094771] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 65306 00:07:27.893 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (65306) - No such process 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:27.893 { 00:07:27.893 "params": { 00:07:27.893 "name": "Nvme$subsystem", 00:07:27.893 "trtype": "$TEST_TRANSPORT", 00:07:27.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:27.893 "adrfam": "ipv4", 00:07:27.893 "trsvcid": "$NVMF_PORT", 00:07:27.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:27.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:27.893 "hdgst": ${hdgst:-false}, 00:07:27.893 "ddgst": ${ddgst:-false} 00:07:27.893 }, 00:07:27.893 "method": "bdev_nvme_attach_controller" 00:07:27.893 } 00:07:27.893 EOF 00:07:27.893 )") 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:27.893 17:07:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:27.893 "params": { 00:07:27.893 "name": "Nvme0", 00:07:27.893 "trtype": "tcp", 00:07:27.893 "traddr": "10.0.0.3", 00:07:27.893 "adrfam": "ipv4", 00:07:27.893 "trsvcid": "4420", 00:07:27.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:27.893 "hdgst": false, 00:07:27.893 "ddgst": false 00:07:27.893 }, 00:07:27.893 "method": "bdev_nvme_attach_controller" 00:07:27.893 }' 00:07:27.893 [2024-12-06 17:07:10.132177] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:07:27.893 [2024-12-06 17:07:10.132283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65347 ] 00:07:28.151 [2024-12-06 17:07:10.282845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.151 [2024-12-06 17:07:10.316463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.410 Running I/O for 1 seconds... 
00:07:29.342 1344.00 IOPS, 84.00 MiB/s 00:07:29.342 Latency(us) 00:07:29.342 [2024-12-06T17:07:11.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.342 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:29.342 Verification LBA range: start 0x0 length 0x400 00:07:29.342 Nvme0n1 : 1.03 1367.75 85.48 0.00 0.00 45823.08 5302.46 37891.72 00:07:29.342 [2024-12-06T17:07:11.640Z] =================================================================================================================== 00:07:29.342 [2024-12-06T17:07:11.640Z] Total : 1367.75 85.48 0.00 0.00 45823.08 5302.46 37891.72 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:29.342 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:29.599 rmmod nvme_tcp 00:07:29.599 rmmod nvme_fabrics 00:07:29.599 rmmod nvme_keyring 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 65248 ']' 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 65248 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 65248 ']' 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 65248 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65248 00:07:29.599 killing process with pid 65248 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65248' 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 65248 00:07:29.599 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 65248 00:07:29.599 [2024-12-06 17:07:11.890012] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:29.857 17:07:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:29.857 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:29.857 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:29.857 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:29.857 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:29.857 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:29.857 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:29.858 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:29.858 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:29.858 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:29.858 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:30.116 00:07:30.116 real 0m5.108s 00:07:30.116 user 0m18.191s 00:07:30.116 sys 0m1.238s 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.116 ************************************ 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:30.116 END TEST nvmf_host_management 00:07:30.116 ************************************ 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.116 ************************************ 00:07:30.116 START TEST nvmf_lvol 00:07:30.116 ************************************ 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:30.116 * Looking for test storage... 
00:07:30.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.116 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.376 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.377 --rc genhtml_branch_coverage=1 00:07:30.377 --rc genhtml_function_coverage=1 00:07:30.377 --rc genhtml_legend=1 00:07:30.377 --rc geninfo_all_blocks=1 00:07:30.377 --rc geninfo_unexecuted_blocks=1 00:07:30.377 00:07:30.377 ' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.377 --rc genhtml_branch_coverage=1 00:07:30.377 --rc genhtml_function_coverage=1 00:07:30.377 --rc genhtml_legend=1 00:07:30.377 --rc geninfo_all_blocks=1 00:07:30.377 --rc geninfo_unexecuted_blocks=1 00:07:30.377 00:07:30.377 ' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.377 --rc genhtml_branch_coverage=1 00:07:30.377 --rc genhtml_function_coverage=1 00:07:30.377 --rc genhtml_legend=1 00:07:30.377 --rc geninfo_all_blocks=1 00:07:30.377 --rc geninfo_unexecuted_blocks=1 00:07:30.377 00:07:30.377 ' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.377 --rc genhtml_branch_coverage=1 00:07:30.377 --rc genhtml_function_coverage=1 00:07:30.377 --rc genhtml_legend=1 00:07:30.377 --rc geninfo_all_blocks=1 00:07:30.377 --rc geninfo_unexecuted_blocks=1 00:07:30.377 00:07:30.377 ' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.377 17:07:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.377 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:30.377 
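One detail worth flagging in the sourcing above: nvmf/common.sh line 33 printed "[: : integer expression expected" because an empty value reached test(1)'s numeric -eq operator ('[' '' -eq 1 ']' in the trace). A two-line reproduction plus a guarded form; the variable name below is illustrative only, not a claim about what common.sh uses:

shm_id=""                                  # empty, as in the trace above
[ "$shm_id" -eq 1 ]                        # prints "[: : integer expression expected", exits 2
[ -n "$shm_id" ] && [ "$shm_id" -eq 1 ]    # guarded form: skip the numeric test when empty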
17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:30.377 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:30.378 Cannot find device "nvmf_init_br" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:30.378 Cannot find device "nvmf_init_br2" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:30.378 Cannot find device "nvmf_tgt_br" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:30.378 Cannot find device "nvmf_tgt_br2" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:30.378 Cannot find device "nvmf_init_br" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:30.378 Cannot find device "nvmf_init_br2" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:30.378 Cannot find device "nvmf_tgt_br" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:30.378 Cannot find device "nvmf_tgt_br2" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:30.378 Cannot find device "nvmf_br" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:30.378 Cannot find device "nvmf_init_if" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:30.378 Cannot find device "nvmf_init_if2" 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:30.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:30.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:30.378 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:30.638 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:30.638 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:07:30.638 00:07:30.638 --- 10.0.0.3 ping statistics --- 00:07:30.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.638 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:30.638 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:30.638 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:07:30.638 00:07:30.638 --- 10.0.0.4 ping statistics --- 00:07:30.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.638 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:30.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:07:30.638 00:07:30.638 --- 10.0.0.1 ping statistics --- 00:07:30.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.638 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:30.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:07:30.638 00:07:30.638 --- 10.0.0.2 ping statistics --- 00:07:30.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.638 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65623 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65623 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65623 ']' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.638 17:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.638 [2024-12-06 17:07:12.916670] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
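The four one-packet pings above (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) confirm the topology nvmf_veth_init just assembled: target interfaces live in the nvmf_tgt_ns_spdk namespace, initiator interfaces stay in the root namespace, and the veth peers are joined through the nvmf_br bridge. A condensed sketch of that bring-up, showing one of the two pairs per side and using only commands that appear in the trace:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                       # join both peers to the bridge
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.3                                            # host -> namespace reachability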
00:07:30.638 [2024-12-06 17:07:12.916967] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.897 [2024-12-06 17:07:13.070113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.897 [2024-12-06 17:07:13.112855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.897 [2024-12-06 17:07:13.113141] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.897 [2024-12-06 17:07:13.113371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.897 [2024-12-06 17:07:13.113527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.897 [2024-12-06 17:07:13.113570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.897 [2024-12-06 17:07:13.114486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.897 [2024-12-06 17:07:13.114621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.897 [2024-12-06 17:07:13.114627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.156 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.413 [2024-12-06 17:07:13.534051] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.413 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.670 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:31.671 17:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.236 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:32.236 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:32.507 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:32.765 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=40d13ba8-74b7-43d6-9c6e-0396d5da6eab 00:07:32.765 17:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
40d13ba8-74b7-43d6-9c6e-0396d5da6eab lvol 20 00:07:33.022 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 00:07:33.022 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.280 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 00:07:33.538 17:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:34.104 [2024-12-06 17:07:16.105535] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:34.104 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:34.104 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:34.104 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65757 00:07:34.104 17:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:35.521 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 MY_SNAPSHOT 00:07:35.521 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e4adb12d-4de4-4507-a577-62452e73d7cc 00:07:35.521 17:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 30 00:07:36.086 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone e4adb12d-4de4-4507-a577-62452e73d7cc MY_CLONE 00:07:36.345 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0f1a5c82-0f29-40a3-b853-00789ee459e2 00:07:36.345 17:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0f1a5c82-0f29-40a3-b853-00789ee459e2 00:07:36.911 17:07:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65757 00:07:45.022 Initializing NVMe Controllers 00:07:45.022 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:45.022 Controller IO queue size 128, less than required. 00:07:45.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:45.022 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:45.022 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:45.022 Initialization complete. Launching workers. 
00:07:45.022 ======================================================== 00:07:45.022 Latency(us) 00:07:45.022 Device Information : IOPS MiB/s Average min max 00:07:45.022 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10630.10 41.52 12041.69 3177.33 76170.32 00:07:45.022 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10557.80 41.24 12126.77 2965.57 66708.18 00:07:45.022 ======================================================== 00:07:45.022 Total : 21187.90 82.77 12084.09 2965.57 76170.32 00:07:45.022 00:07:45.022 17:07:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:45.022 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 00:07:45.280 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40d13ba8-74b7-43d6-9c6e-0396d5da6eab 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:45.538 rmmod nvme_tcp 00:07:45.538 rmmod nvme_fabrics 00:07:45.538 rmmod nvme_keyring 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65623 ']' 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65623 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65623 ']' 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65623 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65623 00:07:45.538 killing process with pid 65623 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65623' 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65623 00:07:45.538 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65623 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:45.796 17:07:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:45.796 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:46.055 00:07:46.055 real 0m15.964s 00:07:46.055 user 1m6.395s 00:07:46.055 sys 0m3.846s 00:07:46.055 ************************************ 00:07:46.055 END TEST nvmf_lvol 00:07:46.055 
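[Note on the trace above: the nvmf_lvol test that just finished reduces to the following RPC sequence. This is a condensed sketch, not the verbatim script; the UUIDs are from this particular run and will differ on a rerun, and rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py as used throughout the log.]

    rpc.py bdev_malloc_create 64 512                 # run twice -> Malloc0, Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs        # -> 40d13ba8-74b7-43d6-9c6e-0396d5da6eab
    rpc.py bdev_lvol_create -u 40d13ba8-74b7-43d6-9c6e-0396d5da6eab lvol 20
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4be4e8c-fd8a-47cf-becb-3eb32d7090f8
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &    # background I/O load
    rpc.py bdev_lvol_snapshot d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 MY_SNAPSHOT   # taken under load
    rpc.py bdev_lvol_resize d4be4e8c-fd8a-47cf-becb-3eb32d7090f8 30
    rpc.py bdev_lvol_clone e4adb12d-4de4-4507-a577-62452e73d7cc MY_CLONE
    rpc.py bdev_lvol_inflate 0f1a5c82-0f29-40a3-b853-00789ee459e2
    wait "$perf_pid"                                 # 65757 in this run

[The core masks are deliberately disjoint: the target ran with -m 0x7 (reactors on cores 0-2, as the reactor_run notices above confirm), while spdk_nvme_perf used -c 0x18 (cores 3-4, matching the "with lcore 3"/"with lcore 4" association lines), so initiator and target never contend for a core.]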
************************************ 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.055 ************************************ 00:07:46.055 START TEST nvmf_lvs_grow 00:07:46.055 ************************************ 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:46.055 * Looking for test storage... 00:07:46.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.055 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.331 --rc genhtml_branch_coverage=1 00:07:46.331 --rc genhtml_function_coverage=1 00:07:46.331 --rc genhtml_legend=1 00:07:46.331 --rc geninfo_all_blocks=1 00:07:46.331 --rc geninfo_unexecuted_blocks=1 00:07:46.331 00:07:46.331 ' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.331 --rc genhtml_branch_coverage=1 00:07:46.331 --rc genhtml_function_coverage=1 00:07:46.331 --rc genhtml_legend=1 00:07:46.331 --rc geninfo_all_blocks=1 00:07:46.331 --rc geninfo_unexecuted_blocks=1 00:07:46.331 00:07:46.331 ' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.331 --rc genhtml_branch_coverage=1 00:07:46.331 --rc genhtml_function_coverage=1 00:07:46.331 --rc genhtml_legend=1 00:07:46.331 --rc geninfo_all_blocks=1 00:07:46.331 --rc geninfo_unexecuted_blocks=1 00:07:46.331 00:07:46.331 ' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.331 --rc genhtml_branch_coverage=1 00:07:46.331 --rc genhtml_function_coverage=1 00:07:46.331 --rc genhtml_legend=1 00:07:46.331 --rc geninfo_all_blocks=1 00:07:46.331 --rc geninfo_unexecuted_blocks=1 00:07:46.331 00:07:46.331 ' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:46.331 17:07:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
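[One real wart worth flagging in the trace above: the message "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" comes from the traced '[' '' -eq 1 ']' test just before it. The -eq operator requires integer operands, so testing an unset or empty variable makes [ print that error and return status 2, which the harness simply treats as false and moves past. A minimal reproduction and the usual guard follow; flag is a hypothetical stand-in, since the log does not show which variable common.sh line 33 actually reads.]

    flag=''
    [ "$flag" -eq 1 ] && echo yes       # prints '[: : integer expression expected'; test is false
    [ "${flag:-0}" -eq 1 ] && echo yes  # defaulting to 0 keeps the numeric test well-formed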
00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:46.331 Cannot find device "nvmf_init_br" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:46.331 Cannot find device "nvmf_init_br2" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:46.331 Cannot find device "nvmf_tgt_br" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:46.331 Cannot find device "nvmf_tgt_br2" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:46.331 Cannot find device "nvmf_init_br" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:46.331 Cannot find device "nvmf_init_br2" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:46.331 Cannot find device "nvmf_tgt_br" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:46.331 Cannot find device "nvmf_tgt_br2" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:46.331 Cannot find device "nvmf_br" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:46.331 Cannot find device "nvmf_init_if" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:46.331 Cannot find device "nvmf_init_if2" 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:46.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:46.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:46.331 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
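[The net effect of the ip commands above is a small veth-plus-bridge lab: two initiator-side interfaces stay in the root namespace, two target-side interfaces live in nvmf_tgt_ns_spdk, and a single bridge joins their peers. A condensed sketch of one half of the topology, using the names and addresses assigned above (the if2 pair is built identically with 10.0.0.2 and 10.0.0.4):]

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end: 10.0.0.1/24
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end: 10.0.0.3/24
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target NIC moves into the namespace
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # both *_br peers join the bridge,
    ip link set nvmf_tgt_br master nvmf_br                      # giving initiator <-> target L2 reachability

[The burst of "Cannot find device ..." messages just before this is expected: nvmf_veth_init first tears down whatever topology a previous run left behind, and each failing cleanup command is followed by a traced 'true'. The iptables rules added next carry an 'SPDK_NVMF:' comment precisely so the later teardown (the iptables-save / grep -v SPDK_NVMF / iptables-restore sequence visible at the end of the previous test) can strip them without touching unrelated rules.]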
00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:46.590 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:46.590 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.111 ms 00:07:46.590 00:07:46.590 --- 10.0.0.3 ping statistics --- 00:07:46.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.590 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:46.590 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:46.590 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:07:46.590 00:07:46.590 --- 10.0.0.4 ping statistics --- 00:07:46.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.590 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:46.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:07:46.590 00:07:46.590 --- 10.0.0.1 ping statistics --- 00:07:46.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.590 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:46.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:07:46.590 00:07:46.590 --- 10.0.0.2 ping statistics --- 00:07:46.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.590 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=66180 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 66180 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 66180 ']' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.590 17:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.848 [2024-12-06 17:07:28.929301] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:07:46.848 [2024-12-06 17:07:28.929396] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.848 [2024-12-06 17:07:29.079361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.848 [2024-12-06 17:07:29.117353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.848 [2024-12-06 17:07:29.117421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.848 [2024-12-06 17:07:29.117438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.848 [2024-12-06 17:07:29.117448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.848 [2024-12-06 17:07:29.117457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:46.848 [2024-12-06 17:07:29.117801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.105 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:47.363 [2024-12-06 17:07:29.550580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:47.363 ************************************ 00:07:47.363 START TEST lvs_grow_clean 00:07:47.363 ************************************ 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:47.363 17:07:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:47.363 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.930 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:47.930 17:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:48.189 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ba897889-5862-4d2c-aa27-3f7cf41ce208 00:07:48.189 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:07:48.189 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:48.448 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:48.448 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:48.448 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba897889-5862-4d2c-aa27-3f7cf41ce208 lvol 150 00:07:48.706 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5743a9f0-0e60-4874-b4a1-5425a5e296b8 00:07:48.706 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:48.706 17:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:48.964 [2024-12-06 17:07:31.140395] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:48.964 [2024-12-06 17:07:31.140484] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:48.964 true 00:07:48.964 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:07:48.964 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:49.222 17:07:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:49.222 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:49.480 17:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5743a9f0-0e60-4874-b4a1-5425a5e296b8 00:07:49.737 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:49.996 [2024-12-06 17:07:32.277055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:50.254 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66340 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66340 /var/tmp/bdevperf.sock 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 66340 ']' 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.512 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:50.512 [2024-12-06 17:07:32.619668] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:07:50.512 [2024-12-06 17:07:32.619764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66340 ] 00:07:50.512 [2024-12-06 17:07:32.770377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.771 [2024-12-06 17:07:32.810977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.771 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.771 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:50.771 17:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:51.029 Nvme0n1 00:07:51.029 17:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:51.288 [ 00:07:51.288 { 00:07:51.288 "aliases": [ 00:07:51.288 "5743a9f0-0e60-4874-b4a1-5425a5e296b8" 00:07:51.288 ], 00:07:51.288 "assigned_rate_limits": { 00:07:51.288 "r_mbytes_per_sec": 0, 00:07:51.288 "rw_ios_per_sec": 0, 00:07:51.288 "rw_mbytes_per_sec": 0, 00:07:51.288 "w_mbytes_per_sec": 0 00:07:51.288 }, 00:07:51.288 "block_size": 4096, 00:07:51.288 "claimed": false, 00:07:51.288 "driver_specific": { 00:07:51.288 "mp_policy": "active_passive", 00:07:51.288 "nvme": [ 00:07:51.288 { 00:07:51.288 "ctrlr_data": { 00:07:51.288 "ana_reporting": false, 00:07:51.289 "cntlid": 1, 00:07:51.289 "firmware_revision": "25.01", 00:07:51.289 "model_number": "SPDK bdev Controller", 00:07:51.289 "multi_ctrlr": true, 00:07:51.289 "oacs": { 00:07:51.289 "firmware": 0, 00:07:51.289 "format": 0, 00:07:51.289 "ns_manage": 0, 00:07:51.289 "security": 0 00:07:51.289 }, 00:07:51.289 "serial_number": "SPDK0", 00:07:51.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.289 "vendor_id": "0x8086" 00:07:51.289 }, 00:07:51.289 "ns_data": { 00:07:51.289 "can_share": true, 00:07:51.289 "id": 1 00:07:51.289 }, 00:07:51.289 "trid": { 00:07:51.289 "adrfam": "IPv4", 00:07:51.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:51.289 "traddr": "10.0.0.3", 00:07:51.289 "trsvcid": "4420", 00:07:51.289 "trtype": "TCP" 00:07:51.289 }, 00:07:51.289 "vs": { 00:07:51.289 "nvme_version": "1.3" 00:07:51.289 } 00:07:51.289 } 00:07:51.289 ] 00:07:51.289 }, 00:07:51.289 "memory_domains": [ 00:07:51.289 { 00:07:51.289 "dma_device_id": "system", 00:07:51.289 "dma_device_type": 1 00:07:51.289 } 00:07:51.289 ], 00:07:51.289 "name": "Nvme0n1", 00:07:51.289 "num_blocks": 38912, 00:07:51.289 "numa_id": -1, 00:07:51.289 "product_name": "NVMe disk", 00:07:51.289 "supported_io_types": { 00:07:51.289 "abort": true, 00:07:51.289 "compare": true, 00:07:51.289 "compare_and_write": true, 00:07:51.289 "copy": true, 00:07:51.289 "flush": true, 00:07:51.289 "get_zone_info": false, 00:07:51.289 "nvme_admin": true, 00:07:51.289 "nvme_io": true, 00:07:51.289 "nvme_io_md": false, 00:07:51.289 "nvme_iov_md": false, 00:07:51.289 "read": true, 00:07:51.289 "reset": true, 00:07:51.289 "seek_data": false, 00:07:51.289 "seek_hole": false, 00:07:51.289 "unmap": true, 00:07:51.289 
"write": true, 00:07:51.289 "write_zeroes": true, 00:07:51.289 "zcopy": false, 00:07:51.289 "zone_append": false, 00:07:51.289 "zone_management": false 00:07:51.289 }, 00:07:51.289 "uuid": "5743a9f0-0e60-4874-b4a1-5425a5e296b8", 00:07:51.289 "zoned": false 00:07:51.289 } 00:07:51.289 ] 00:07:51.289 17:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66374 00:07:51.289 17:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:51.289 17:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:51.547 Running I/O for 10 seconds... 00:07:52.482 Latency(us) 00:07:52.482 [2024-12-06T17:07:34.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:52.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.482 Nvme0n1 : 1.00 7615.00 29.75 0.00 0.00 0.00 0.00 0.00 00:07:52.482 [2024-12-06T17:07:34.780Z] =================================================================================================================== 00:07:52.482 [2024-12-06T17:07:34.780Z] Total : 7615.00 29.75 0.00 0.00 0.00 0.00 0.00 00:07:52.482 00:07:53.437 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:07:53.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.437 Nvme0n1 : 2.00 7705.00 30.10 0.00 0.00 0.00 0.00 0.00 00:07:53.437 [2024-12-06T17:07:35.735Z] =================================================================================================================== 00:07:53.437 [2024-12-06T17:07:35.735Z] Total : 7705.00 30.10 0.00 0.00 0.00 0.00 0.00 00:07:53.437 00:07:53.695 true 00:07:53.695 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:07:53.695 17:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:54.263 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:54.263 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:54.263 17:07:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 66374 00:07:54.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.522 Nvme0n1 : 3.00 7756.67 30.30 0.00 0.00 0.00 0.00 0.00 00:07:54.522 [2024-12-06T17:07:36.820Z] =================================================================================================================== 00:07:54.522 [2024-12-06T17:07:36.820Z] Total : 7756.67 30.30 0.00 0.00 0.00 0.00 0.00 00:07:54.522 00:07:55.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.489 Nvme0n1 : 4.00 7775.50 30.37 0.00 0.00 0.00 0.00 0.00 00:07:55.489 [2024-12-06T17:07:37.787Z] =================================================================================================================== 00:07:55.489 [2024-12-06T17:07:37.787Z] Total : 7775.50 30.37 0.00 0.00 0.00 
0.00 0.00 00:07:55.489 00:07:56.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.426 Nvme0n1 : 5.00 7623.60 29.78 0.00 0.00 0.00 0.00 0.00 00:07:56.426 [2024-12-06T17:07:38.725Z] =================================================================================================================== 00:07:56.427 [2024-12-06T17:07:38.725Z] Total : 7623.60 29.78 0.00 0.00 0.00 0.00 0.00 00:07:56.427 00:07:57.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.816 Nvme0n1 : 6.00 7608.17 29.72 0.00 0.00 0.00 0.00 0.00 00:07:57.816 [2024-12-06T17:07:40.114Z] =================================================================================================================== 00:07:57.816 [2024-12-06T17:07:40.114Z] Total : 7608.17 29.72 0.00 0.00 0.00 0.00 0.00 00:07:57.816 00:07:58.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.751 Nvme0n1 : 7.00 7549.57 29.49 0.00 0.00 0.00 0.00 0.00 00:07:58.751 [2024-12-06T17:07:41.049Z] =================================================================================================================== 00:07:58.751 [2024-12-06T17:07:41.049Z] Total : 7549.57 29.49 0.00 0.00 0.00 0.00 0.00 00:07:58.751 00:07:59.706 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.706 Nvme0n1 : 8.00 7527.38 29.40 0.00 0.00 0.00 0.00 0.00 00:07:59.706 [2024-12-06T17:07:42.004Z] =================================================================================================================== 00:07:59.706 [2024-12-06T17:07:42.004Z] Total : 7527.38 29.40 0.00 0.00 0.00 0.00 0.00 00:07:59.706 00:08:00.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.642 Nvme0n1 : 9.00 7501.56 29.30 0.00 0.00 0.00 0.00 0.00 00:08:00.642 [2024-12-06T17:07:42.940Z] =================================================================================================================== 00:08:00.642 [2024-12-06T17:07:42.940Z] Total : 7501.56 29.30 0.00 0.00 0.00 0.00 0.00 00:08:00.642 00:08:01.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.577 Nvme0n1 : 10.00 7498.40 29.29 0.00 0.00 0.00 0.00 0.00 00:08:01.577 [2024-12-06T17:07:43.875Z] =================================================================================================================== 00:08:01.577 [2024-12-06T17:07:43.875Z] Total : 7498.40 29.29 0.00 0.00 0.00 0.00 0.00 00:08:01.577 00:08:01.577 00:08:01.577 Latency(us) 00:08:01.577 [2024-12-06T17:07:43.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.577 Nvme0n1 : 10.01 7500.74 29.30 0.00 0.00 17059.76 5987.61 103904.35 00:08:01.577 [2024-12-06T17:07:43.875Z] =================================================================================================================== 00:08:01.577 [2024-12-06T17:07:43.875Z] Total : 7500.74 29.30 0.00 0.00 17059.76 5987.61 103904.35 00:08:01.577 { 00:08:01.577 "results": [ 00:08:01.577 { 00:08:01.577 "job": "Nvme0n1", 00:08:01.577 "core_mask": "0x2", 00:08:01.577 "workload": "randwrite", 00:08:01.577 "status": "finished", 00:08:01.577 "queue_depth": 128, 00:08:01.577 "io_size": 4096, 00:08:01.577 "runtime": 10.013949, 00:08:01.577 "iops": 7500.737221649521, 00:08:01.577 "mibps": 29.299754772068443, 00:08:01.577 "io_failed": 0, 00:08:01.577 "io_timeout": 0, 00:08:01.577 "avg_latency_us": 
17059.760680874137, 00:08:01.577 "min_latency_us": 5987.607272727273, 00:08:01.577 "max_latency_us": 103904.34909090909 00:08:01.577 } 00:08:01.577 ], 00:08:01.577 "core_count": 1 00:08:01.577 } 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66340 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 66340 ']' 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 66340 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66340 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:01.577 killing process with pid 66340 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66340' 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 66340 00:08:01.577 Received shutdown signal, test time was about 10.000000 seconds 00:08:01.577 00:08:01.577 Latency(us) 00:08:01.577 [2024-12-06T17:07:43.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.577 [2024-12-06T17:07:43.875Z] =================================================================================================================== 00:08:01.577 [2024-12-06T17:07:43.875Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:01.577 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 66340 00:08:01.836 17:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:02.095 17:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:02.354 17:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:02.354 17:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:02.613 17:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:02.613 17:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:02.613 17:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.889 [2024-12-06 17:07:45.040028] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:02.889 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:03.147 2024/12/06 17:07:45 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ba897889-5862-4d2c-aa27-3f7cf41ce208], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:03.147 request: 00:08:03.147 { 00:08:03.147 "method": "bdev_lvol_get_lvstores", 00:08:03.147 "params": { 00:08:03.147 "uuid": "ba897889-5862-4d2c-aa27-3f7cf41ce208" 00:08:03.147 } 00:08:03.147 } 00:08:03.147 Got JSON-RPC error response 00:08:03.147 GoRPCClient: error on JSON-RPC call 00:08:03.147 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:03.147 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:03.147 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:03.147 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:03.147 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:03.714 aio_bdev 00:08:03.714 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5743a9f0-0e60-4874-b4a1-5425a5e296b8 00:08:03.714 17:07:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=5743a9f0-0e60-4874-b4a1-5425a5e296b8 00:08:03.714 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.714 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:03.714 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.714 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.714 17:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:03.972 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5743a9f0-0e60-4874-b4a1-5425a5e296b8 -t 2000 00:08:04.229 [ 00:08:04.229 { 00:08:04.229 "aliases": [ 00:08:04.229 "lvs/lvol" 00:08:04.229 ], 00:08:04.229 "assigned_rate_limits": { 00:08:04.229 "r_mbytes_per_sec": 0, 00:08:04.229 "rw_ios_per_sec": 0, 00:08:04.229 "rw_mbytes_per_sec": 0, 00:08:04.229 "w_mbytes_per_sec": 0 00:08:04.229 }, 00:08:04.229 "block_size": 4096, 00:08:04.229 "claimed": false, 00:08:04.229 "driver_specific": { 00:08:04.229 "lvol": { 00:08:04.229 "base_bdev": "aio_bdev", 00:08:04.229 "clone": false, 00:08:04.229 "esnap_clone": false, 00:08:04.229 "lvol_store_uuid": "ba897889-5862-4d2c-aa27-3f7cf41ce208", 00:08:04.229 "num_allocated_clusters": 38, 00:08:04.229 "snapshot": false, 00:08:04.229 "thin_provision": false 00:08:04.229 } 00:08:04.229 }, 00:08:04.229 "name": "5743a9f0-0e60-4874-b4a1-5425a5e296b8", 00:08:04.229 "num_blocks": 38912, 00:08:04.229 "product_name": "Logical Volume", 00:08:04.229 "supported_io_types": { 00:08:04.229 "abort": false, 00:08:04.229 "compare": false, 00:08:04.229 "compare_and_write": false, 00:08:04.229 "copy": false, 00:08:04.229 "flush": false, 00:08:04.229 "get_zone_info": false, 00:08:04.229 "nvme_admin": false, 00:08:04.229 "nvme_io": false, 00:08:04.229 "nvme_io_md": false, 00:08:04.229 "nvme_iov_md": false, 00:08:04.229 "read": true, 00:08:04.229 "reset": true, 00:08:04.229 "seek_data": true, 00:08:04.229 "seek_hole": true, 00:08:04.229 "unmap": true, 00:08:04.229 "write": true, 00:08:04.229 "write_zeroes": true, 00:08:04.229 "zcopy": false, 00:08:04.229 "zone_append": false, 00:08:04.229 "zone_management": false 00:08:04.229 }, 00:08:04.229 "uuid": "5743a9f0-0e60-4874-b4a1-5425a5e296b8", 00:08:04.229 "zoned": false 00:08:04.229 } 00:08:04.229 ] 00:08:04.229 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:04.229 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:04.229 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:04.487 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:04.488 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:04.488 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:04.746 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:04.746 17:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5743a9f0-0e60-4874-b4a1-5425a5e296b8 00:08:05.004 17:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba897889-5862-4d2c-aa27-3f7cf41ce208 00:08:05.264 17:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:05.523 17:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.090 ************************************ 00:08:06.090 END TEST lvs_grow_clean 00:08:06.090 ************************************ 00:08:06.090 00:08:06.090 real 0m18.603s 00:08:06.091 user 0m17.958s 00:08:06.091 sys 0m2.136s 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.091 ************************************ 00:08:06.091 START TEST lvs_grow_dirty 00:08:06.091 ************************************ 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.091 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:06.091 
17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.349 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:06.349 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:06.917 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:06.917 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:06.917 17:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.175 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.175 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.175 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 24c6de82-7f11-4b22-8433-7c8227b4df8e lvol 150 00:08:07.432 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:07.432 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:07.432 17:07:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:07.997 [2024-12-06 17:07:49.991298] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:07.997 [2024-12-06 17:07:49.991413] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:07.997 true 00:08:07.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:07.997 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.254 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.254 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.511 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:08.768 17:07:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:09.026 [2024-12-06 17:07:51.231942] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:09.026 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:09.284 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66783 00:08:09.284 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66783 /var/tmp/bdevperf.sock 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66783 ']' 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.285 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.543 [2024-12-06 17:07:51.593385] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
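While the bdevperf instance (pid 66783) initializes, it is worth condensing the "dirty" setup traced above into the underlying RPC sequence. This is a sketch assembled from the commands visible in this log, not a standalone test: the lvstore UUID 24c6de82-7f11-4b22-8433-7c8227b4df8e and lvol UUID 10a1ab4a-0094-402a-ac4e-d8717206bb83 are the ones minted in this run, and a running nvmf_tgt inside the test netns is assumed.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  rm -f "$aio" && truncate -s 200M "$aio"        # 200M backing file -> 49 x 4MiB data clusters
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  $rpc bdev_lvol_create -u 24c6de82-7f11-4b22-8433-7c8227b4df8e lvol 150   # 150M lvol = 38 clusters
  truncate -s 400M "$aio"                        # grow the file underneath the bdev
  $rpc bdev_aio_rescan aio_bdev                  # bdev grows 51200 -> 102400 blocks; lvstore still 49 clusters
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10a1ab4a-0094-402a-ac4e-d8717206bb83
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # bdevperf side, against its own RPC socket:
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # issued about two seconds into the 10-second randwrite run:
  $rpc bdev_lvol_grow_lvstore -u 24c6de82-7f11-4b22-8433-7c8227b4df8e     # expand to the full 400M -> 99 clusters

The point of the dirty variant is that this grow happens while I/O is in flight and the target is afterwards killed with SIGKILL, so the enlarged lvstore metadata is never cleanly closed.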
00:08:09.543 [2024-12-06 17:07:51.593483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66783 ] 00:08:09.543 [2024-12-06 17:07:51.747174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.543 [2024-12-06 17:07:51.786651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.801 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.801 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:09.801 17:07:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.061 Nvme0n1 00:08:10.061 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.320 [ 00:08:10.320 { 00:08:10.320 "aliases": [ 00:08:10.320 "10a1ab4a-0094-402a-ac4e-d8717206bb83" 00:08:10.320 ], 00:08:10.320 "assigned_rate_limits": { 00:08:10.320 "r_mbytes_per_sec": 0, 00:08:10.320 "rw_ios_per_sec": 0, 00:08:10.320 "rw_mbytes_per_sec": 0, 00:08:10.320 "w_mbytes_per_sec": 0 00:08:10.320 }, 00:08:10.320 "block_size": 4096, 00:08:10.320 "claimed": false, 00:08:10.320 "driver_specific": { 00:08:10.320 "mp_policy": "active_passive", 00:08:10.320 "nvme": [ 00:08:10.320 { 00:08:10.320 "ctrlr_data": { 00:08:10.320 "ana_reporting": false, 00:08:10.320 "cntlid": 1, 00:08:10.320 "firmware_revision": "25.01", 00:08:10.320 "model_number": "SPDK bdev Controller", 00:08:10.320 "multi_ctrlr": true, 00:08:10.320 "oacs": { 00:08:10.320 "firmware": 0, 00:08:10.320 "format": 0, 00:08:10.320 "ns_manage": 0, 00:08:10.320 "security": 0 00:08:10.320 }, 00:08:10.320 "serial_number": "SPDK0", 00:08:10.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.320 "vendor_id": "0x8086" 00:08:10.320 }, 00:08:10.320 "ns_data": { 00:08:10.320 "can_share": true, 00:08:10.320 "id": 1 00:08:10.320 }, 00:08:10.320 "trid": { 00:08:10.320 "adrfam": "IPv4", 00:08:10.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.320 "traddr": "10.0.0.3", 00:08:10.320 "trsvcid": "4420", 00:08:10.320 "trtype": "TCP" 00:08:10.320 }, 00:08:10.320 "vs": { 00:08:10.320 "nvme_version": "1.3" 00:08:10.320 } 00:08:10.320 } 00:08:10.320 ] 00:08:10.320 }, 00:08:10.320 "memory_domains": [ 00:08:10.320 { 00:08:10.320 "dma_device_id": "system", 00:08:10.320 "dma_device_type": 1 00:08:10.320 } 00:08:10.320 ], 00:08:10.320 "name": "Nvme0n1", 00:08:10.320 "num_blocks": 38912, 00:08:10.320 "numa_id": -1, 00:08:10.320 "product_name": "NVMe disk", 00:08:10.320 "supported_io_types": { 00:08:10.320 "abort": true, 00:08:10.320 "compare": true, 00:08:10.320 "compare_and_write": true, 00:08:10.320 "copy": true, 00:08:10.320 "flush": true, 00:08:10.320 "get_zone_info": false, 00:08:10.320 "nvme_admin": true, 00:08:10.320 "nvme_io": true, 00:08:10.320 "nvme_io_md": false, 00:08:10.320 "nvme_iov_md": false, 00:08:10.320 "read": true, 00:08:10.320 "reset": true, 00:08:10.320 "seek_data": false, 00:08:10.320 "seek_hole": false, 00:08:10.320 "unmap": true, 00:08:10.320 
"write": true, 00:08:10.320 "write_zeroes": true, 00:08:10.320 "zcopy": false, 00:08:10.320 "zone_append": false, 00:08:10.320 "zone_management": false 00:08:10.320 }, 00:08:10.320 "uuid": "10a1ab4a-0094-402a-ac4e-d8717206bb83", 00:08:10.320 "zoned": false 00:08:10.320 } 00:08:10.320 ] 00:08:10.320 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.320 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66817 00:08:10.320 17:07:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.579 Running I/O for 10 seconds... 00:08:11.515 Latency(us) 00:08:11.515 [2024-12-06T17:07:53.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.515 Nvme0n1 : 1.00 8320.00 32.50 0.00 0.00 0.00 0.00 0.00 00:08:11.515 [2024-12-06T17:07:53.813Z] =================================================================================================================== 00:08:11.515 [2024-12-06T17:07:53.813Z] Total : 8320.00 32.50 0.00 0.00 0.00 0.00 0.00 00:08:11.515 00:08:12.451 17:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:12.451 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.451 Nvme0n1 : 2.00 8147.50 31.83 0.00 0.00 0.00 0.00 0.00 00:08:12.451 [2024-12-06T17:07:54.749Z] =================================================================================================================== 00:08:12.451 [2024-12-06T17:07:54.749Z] Total : 8147.50 31.83 0.00 0.00 0.00 0.00 0.00 00:08:12.451 00:08:12.710 true 00:08:12.710 17:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:12.710 17:07:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.275 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.275 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.275 17:07:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66817 00:08:13.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.534 Nvme0n1 : 3.00 7938.67 31.01 0.00 0.00 0.00 0.00 0.00 00:08:13.534 [2024-12-06T17:07:55.832Z] =================================================================================================================== 00:08:13.534 [2024-12-06T17:07:55.832Z] Total : 7938.67 31.01 0.00 0.00 0.00 0.00 0.00 00:08:13.534 00:08:14.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.472 Nvme0n1 : 4.00 7878.00 30.77 0.00 0.00 0.00 0.00 0.00 00:08:14.472 [2024-12-06T17:07:56.770Z] =================================================================================================================== 00:08:14.472 [2024-12-06T17:07:56.770Z] Total : 7878.00 30.77 0.00 0.00 0.00 
0.00 0.00 00:08:14.472 00:08:15.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.407 Nvme0n1 : 5.00 7813.40 30.52 0.00 0.00 0.00 0.00 0.00 00:08:15.407 [2024-12-06T17:07:57.705Z] =================================================================================================================== 00:08:15.407 [2024-12-06T17:07:57.705Z] Total : 7813.40 30.52 0.00 0.00 0.00 0.00 0.00 00:08:15.407 00:08:16.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.807 Nvme0n1 : 6.00 7777.17 30.38 0.00 0.00 0.00 0.00 0.00 00:08:16.807 [2024-12-06T17:07:59.105Z] =================================================================================================================== 00:08:16.807 [2024-12-06T17:07:59.105Z] Total : 7777.17 30.38 0.00 0.00 0.00 0.00 0.00 00:08:16.807 00:08:17.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.743 Nvme0n1 : 7.00 7383.14 28.84 0.00 0.00 0.00 0.00 0.00 00:08:17.743 [2024-12-06T17:08:00.041Z] =================================================================================================================== 00:08:17.743 [2024-12-06T17:08:00.041Z] Total : 7383.14 28.84 0.00 0.00 0.00 0.00 0.00 00:08:17.743 00:08:18.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.679 Nvme0n1 : 8.00 7279.62 28.44 0.00 0.00 0.00 0.00 0.00 00:08:18.679 [2024-12-06T17:08:00.977Z] =================================================================================================================== 00:08:18.679 [2024-12-06T17:08:00.977Z] Total : 7279.62 28.44 0.00 0.00 0.00 0.00 0.00 00:08:18.679 00:08:19.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.615 Nvme0n1 : 9.00 7302.67 28.53 0.00 0.00 0.00 0.00 0.00 00:08:19.615 [2024-12-06T17:08:01.913Z] =================================================================================================================== 00:08:19.615 [2024-12-06T17:08:01.913Z] Total : 7302.67 28.53 0.00 0.00 0.00 0.00 0.00 00:08:19.615 00:08:20.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.552 Nvme0n1 : 10.00 7308.20 28.55 0.00 0.00 0.00 0.00 0.00 00:08:20.552 [2024-12-06T17:08:02.850Z] =================================================================================================================== 00:08:20.552 [2024-12-06T17:08:02.850Z] Total : 7308.20 28.55 0.00 0.00 0.00 0.00 0.00 00:08:20.552 00:08:20.552 00:08:20.552 Latency(us) 00:08:20.552 [2024-12-06T17:08:02.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.552 Nvme0n1 : 10.00 7317.92 28.59 0.00 0.00 17484.12 7804.74 297414.28 00:08:20.552 [2024-12-06T17:08:02.850Z] =================================================================================================================== 00:08:20.552 [2024-12-06T17:08:02.850Z] Total : 7317.92 28.59 0.00 0.00 17484.12 7804.74 297414.28 00:08:20.552 { 00:08:20.552 "results": [ 00:08:20.552 { 00:08:20.552 "job": "Nvme0n1", 00:08:20.552 "core_mask": "0x2", 00:08:20.552 "workload": "randwrite", 00:08:20.552 "status": "finished", 00:08:20.552 "queue_depth": 128, 00:08:20.552 "io_size": 4096, 00:08:20.552 "runtime": 10.00421, 00:08:20.552 "iops": 7317.919156035309, 00:08:20.552 "mibps": 28.585621703262927, 00:08:20.552 "io_failed": 0, 00:08:20.552 "io_timeout": 0, 00:08:20.552 "avg_latency_us": 
17484.12239838075, 00:08:20.552 "min_latency_us": 7804.741818181818, 00:08:20.552 "max_latency_us": 297414.28363636363 00:08:20.552 } 00:08:20.552 ], 00:08:20.552 "core_count": 1 00:08:20.552 } 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66783 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66783 ']' 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66783 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66783 00:08:20.552 killing process with pid 66783 00:08:20.552 Received shutdown signal, test time was about 10.000000 seconds 00:08:20.552 00:08:20.552 Latency(us) 00:08:20.552 [2024-12-06T17:08:02.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.552 [2024-12-06T17:08:02.850Z] =================================================================================================================== 00:08:20.552 [2024-12-06T17:08:02.850Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66783' 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66783 00:08:20.552 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66783 00:08:20.812 17:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:21.072 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.331 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:21.331 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 66180 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 66180 00:08:21.898 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 66180 Killed "${NVMF_APP[@]}" "$@" 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66986 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66986 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66986 ']' 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.898 17:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:21.898 [2024-12-06 17:08:04.007439] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:08:21.898 [2024-12-06 17:08:04.007548] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.898 [2024-12-06 17:08:04.154413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.898 [2024-12-06 17:08:04.187591] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.898 [2024-12-06 17:08:04.187651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.898 [2024-12-06 17:08:04.187664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.898 [2024-12-06 17:08:04.187672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.898 [2024-12-06 17:08:04.187679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
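Because the previous target (pid 66180) was removed with kill -9, the lvstore on the aio file was never cleanly closed; the "Performing recovery on blobstore" notices that follow show the freshly started target (pid 66986) replaying that dirty metadata as soon as the aio bdev is recreated. The verification it then performs reduces to roughly the following sketch, using this run's UUIDs, with the expected values taken from the checks in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs=24c6de82-7f11-4b22-8433-7c8227b4df8e
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096  # triggers bs_recover
  $rpc bdev_wait_for_examine                      # let lvol examine finish before querying
  $rpc bdev_get_bdevs -b 10a1ab4a-0094-402a-ac4e-d8717206bb83 -t 2000     # the lvol must reappear
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99

In other words, the grow performed while I/O was running (49 to 99 clusters) must survive the unclean shutdown, leaving 61 clusters free once the 38-cluster lvol is accounted for. The suite also deletes the aio bdev mid-check and confirms that bdev_lvol_get_lvstores then fails with Code=-19 (No such device) before recreating it, which is the NOT wrapper visible below.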
00:08:21.898 [2024-12-06 17:08:04.187990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.157 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.416 [2024-12-06 17:08:04.623593] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:22.416 [2024-12-06 17:08:04.623861] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:22.416 [2024-12-06 17:08:04.624068] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.416 17:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:22.989 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10a1ab4a-0094-402a-ac4e-d8717206bb83 -t 2000 00:08:23.252 [ 00:08:23.252 { 00:08:23.252 "aliases": [ 00:08:23.252 "lvs/lvol" 00:08:23.252 ], 00:08:23.252 "assigned_rate_limits": { 00:08:23.252 "r_mbytes_per_sec": 0, 00:08:23.252 "rw_ios_per_sec": 0, 00:08:23.252 "rw_mbytes_per_sec": 0, 00:08:23.252 "w_mbytes_per_sec": 0 00:08:23.252 }, 00:08:23.252 "block_size": 4096, 00:08:23.252 "claimed": false, 00:08:23.252 "driver_specific": { 00:08:23.252 "lvol": { 00:08:23.252 "base_bdev": "aio_bdev", 00:08:23.252 "clone": false, 00:08:23.252 "esnap_clone": false, 00:08:23.252 "lvol_store_uuid": "24c6de82-7f11-4b22-8433-7c8227b4df8e", 00:08:23.252 "num_allocated_clusters": 38, 00:08:23.252 "snapshot": false, 00:08:23.252 
"thin_provision": false 00:08:23.252 } 00:08:23.252 }, 00:08:23.252 "name": "10a1ab4a-0094-402a-ac4e-d8717206bb83", 00:08:23.252 "num_blocks": 38912, 00:08:23.252 "product_name": "Logical Volume", 00:08:23.252 "supported_io_types": { 00:08:23.252 "abort": false, 00:08:23.252 "compare": false, 00:08:23.252 "compare_and_write": false, 00:08:23.252 "copy": false, 00:08:23.252 "flush": false, 00:08:23.252 "get_zone_info": false, 00:08:23.252 "nvme_admin": false, 00:08:23.252 "nvme_io": false, 00:08:23.252 "nvme_io_md": false, 00:08:23.252 "nvme_iov_md": false, 00:08:23.252 "read": true, 00:08:23.252 "reset": true, 00:08:23.252 "seek_data": true, 00:08:23.252 "seek_hole": true, 00:08:23.252 "unmap": true, 00:08:23.252 "write": true, 00:08:23.252 "write_zeroes": true, 00:08:23.252 "zcopy": false, 00:08:23.252 "zone_append": false, 00:08:23.252 "zone_management": false 00:08:23.252 }, 00:08:23.252 "uuid": "10a1ab4a-0094-402a-ac4e-d8717206bb83", 00:08:23.252 "zoned": false 00:08:23.252 } 00:08:23.252 ] 00:08:23.253 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:23.253 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.253 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:23.510 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.510 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:23.510 17:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.768 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.768 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.026 [2024-12-06 17:08:06.245368] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.026 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.027 17:08:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:24.027 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:24.594 2024/12/06 17:08:06 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:24c6de82-7f11-4b22-8433-7c8227b4df8e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:08:24.594 request: 00:08:24.594 { 00:08:24.594 "method": "bdev_lvol_get_lvstores", 00:08:24.594 "params": { 00:08:24.594 "uuid": "24c6de82-7f11-4b22-8433-7c8227b4df8e" 00:08:24.594 } 00:08:24.594 } 00:08:24.594 Got JSON-RPC error response 00:08:24.594 GoRPCClient: error on JSON-RPC call 00:08:24.594 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:24.595 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.595 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.595 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.595 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.855 aio_bdev 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.855 17:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.113 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 10a1ab4a-0094-402a-ac4e-d8717206bb83 -t 2000 00:08:25.371 [ 
00:08:25.371 { 00:08:25.371 "aliases": [ 00:08:25.371 "lvs/lvol" 00:08:25.371 ], 00:08:25.371 "assigned_rate_limits": { 00:08:25.371 "r_mbytes_per_sec": 0, 00:08:25.371 "rw_ios_per_sec": 0, 00:08:25.371 "rw_mbytes_per_sec": 0, 00:08:25.371 "w_mbytes_per_sec": 0 00:08:25.371 }, 00:08:25.371 "block_size": 4096, 00:08:25.371 "claimed": false, 00:08:25.371 "driver_specific": { 00:08:25.371 "lvol": { 00:08:25.371 "base_bdev": "aio_bdev", 00:08:25.371 "clone": false, 00:08:25.371 "esnap_clone": false, 00:08:25.371 "lvol_store_uuid": "24c6de82-7f11-4b22-8433-7c8227b4df8e", 00:08:25.371 "num_allocated_clusters": 38, 00:08:25.371 "snapshot": false, 00:08:25.371 "thin_provision": false 00:08:25.371 } 00:08:25.371 }, 00:08:25.371 "name": "10a1ab4a-0094-402a-ac4e-d8717206bb83", 00:08:25.371 "num_blocks": 38912, 00:08:25.371 "product_name": "Logical Volume", 00:08:25.371 "supported_io_types": { 00:08:25.371 "abort": false, 00:08:25.371 "compare": false, 00:08:25.371 "compare_and_write": false, 00:08:25.371 "copy": false, 00:08:25.371 "flush": false, 00:08:25.371 "get_zone_info": false, 00:08:25.371 "nvme_admin": false, 00:08:25.371 "nvme_io": false, 00:08:25.371 "nvme_io_md": false, 00:08:25.371 "nvme_iov_md": false, 00:08:25.371 "read": true, 00:08:25.371 "reset": true, 00:08:25.371 "seek_data": true, 00:08:25.371 "seek_hole": true, 00:08:25.371 "unmap": true, 00:08:25.371 "write": true, 00:08:25.371 "write_zeroes": true, 00:08:25.371 "zcopy": false, 00:08:25.371 "zone_append": false, 00:08:25.371 "zone_management": false 00:08:25.371 }, 00:08:25.371 "uuid": "10a1ab4a-0094-402a-ac4e-d8717206bb83", 00:08:25.371 "zoned": false 00:08:25.371 } 00:08:25.371 ] 00:08:25.371 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:25.371 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:25.371 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.628 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.628 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:25.628 17:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.893 17:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.893 17:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 10a1ab4a-0094-402a-ac4e-d8717206bb83 00:08:26.161 17:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 24c6de82-7f11-4b22-8433-7c8227b4df8e 00:08:26.729 17:08:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.988 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:27.555 ************************************ 00:08:27.555 END TEST lvs_grow_dirty 00:08:27.555 ************************************ 00:08:27.555 00:08:27.555 real 0m21.304s 00:08:27.555 user 0m43.708s 00:08:27.555 sys 0m7.866s 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:27.555 nvmf_trace.0 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:27.555 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:27.814 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:27.814 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:27.814 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:27.814 17:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:27.814 rmmod nvme_tcp 00:08:27.814 rmmod nvme_fabrics 00:08:27.814 rmmod nvme_keyring 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66986 ']' 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66986 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66986 ']' 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66986 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:27.814 17:08:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66986 00:08:27.814 killing process with pid 66986 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66986' 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66986 00:08:27.814 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66986 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:28.073 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:08:28.331 00:08:28.331 real 0m42.203s 00:08:28.331 user 1m8.565s 00:08:28.331 sys 0m10.978s 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.331 ************************************ 00:08:28.331 END TEST nvmf_lvs_grow 00:08:28.331 ************************************ 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.331 ************************************ 00:08:28.331 START TEST nvmf_bdev_io_wait 00:08:28.331 ************************************ 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.331 * Looking for test storage... 
00:08:28.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.331 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.592 --rc genhtml_branch_coverage=1 00:08:28.592 --rc genhtml_function_coverage=1 00:08:28.592 --rc genhtml_legend=1 00:08:28.592 --rc geninfo_all_blocks=1 00:08:28.592 --rc geninfo_unexecuted_blocks=1 00:08:28.592 00:08:28.592 ' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.592 --rc genhtml_branch_coverage=1 00:08:28.592 --rc genhtml_function_coverage=1 00:08:28.592 --rc genhtml_legend=1 00:08:28.592 --rc geninfo_all_blocks=1 00:08:28.592 --rc geninfo_unexecuted_blocks=1 00:08:28.592 00:08:28.592 ' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.592 --rc genhtml_branch_coverage=1 00:08:28.592 --rc genhtml_function_coverage=1 00:08:28.592 --rc genhtml_legend=1 00:08:28.592 --rc geninfo_all_blocks=1 00:08:28.592 --rc geninfo_unexecuted_blocks=1 00:08:28.592 00:08:28.592 ' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.592 --rc genhtml_branch_coverage=1 00:08:28.592 --rc genhtml_function_coverage=1 00:08:28.592 --rc genhtml_legend=1 00:08:28.592 --rc geninfo_all_blocks=1 00:08:28.592 --rc geninfo_unexecuted_blocks=1 00:08:28.592 00:08:28.592 ' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.592 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.593 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
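Aside: the lt 1.15 2 trace above steps through the cmp_versions helper in scripts/common.sh, which splits each version on '.' and '-' and compares the fields numerically. The sketch below condenses that same component-wise comparison into a standalone function; it is reconstructed from the trace (assuming numeric fields), not copied verbatim from the SPDK source.

# Standalone sketch of the component-wise version compare traced above.
version_lt() {
    local IFS=.- v                      # split fields on '.' and '-'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earliest differing field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                            # equal versions are not 'less than'
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the 'lt 1.15 2' result above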
00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:28.593 
17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:28.593 Cannot find device "nvmf_init_br" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:28.593 Cannot find device "nvmf_init_br2" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:28.593 Cannot find device "nvmf_tgt_br" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:28.593 Cannot find device "nvmf_tgt_br2" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:28.593 Cannot find device "nvmf_init_br" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:28.593 Cannot find device "nvmf_init_br2" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:28.593 Cannot find device "nvmf_tgt_br" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:28.593 Cannot find device "nvmf_tgt_br2" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:28.593 Cannot find device "nvmf_br" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:28.593 Cannot find device "nvmf_init_if" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:28.593 Cannot find device "nvmf_init_if2" 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:28.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:08:28.593 
17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:28.593 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:28.593 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:28.851 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:28.852 17:08:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:28.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:28.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:08:28.852 00:08:28.852 --- 10.0.0.3 ping statistics --- 00:08:28.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.852 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:28.852 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:28.852 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:08:28.852 00:08:28.852 --- 10.0.0.4 ping statistics --- 00:08:28.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.852 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:28.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:08:28.852 00:08:28.852 --- 10.0.0.1 ping statistics --- 00:08:28.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.852 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:28.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:08:28.852 00:08:28.852 --- 10.0.0.2 ping statistics --- 00:08:28.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.852 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=67454 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 67454 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 67454 ']' 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.852 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.109 [2024-12-06 17:08:11.189531] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
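For reference, the nvmf_veth_init sequence traced above (and verified by the four pings) builds the topology sketched below: the initiator end stays in the root namespace, the target end moves into nvmf_tgt_ns_spdk, and a bridge joins the veth peers. This is a condensed reconstruction from the traced ip/iptables commands; the second interface pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) is elided for brevity.

# Condensed sketch of the veth/bridge topology set up by nvmf_veth_init.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end enters the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # bridge joins both peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged traffic
ping -c 1 10.0.0.3   # root ns -> target ns, as in the checks above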
00:08:29.109 [2024-12-06 17:08:11.189629] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.109 [2024-12-06 17:08:11.341863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.109 [2024-12-06 17:08:11.382892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.109 [2024-12-06 17:08:11.382955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.109 [2024-12-06 17:08:11.382968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.109 [2024-12-06 17:08:11.382979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.109 [2024-12-06 17:08:11.382988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:29.109 [2024-12-06 17:08:11.383981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.109 [2024-12-06 17:08:11.384118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.109 [2024-12-06 17:08:11.384203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.109 [2024-12-06 17:08:11.384205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:29.367 [2024-12-06 17:08:11.554161] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 Malloc0 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.367 [2024-12-06 17:08:11.603798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=67488 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.367 { 00:08:29.367 "params": { 00:08:29.367 "name": "Nvme$subsystem", 00:08:29.367 "trtype": "$TEST_TRANSPORT", 00:08:29.367 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:08:29.367 "adrfam": "ipv4", 00:08:29.367 "trsvcid": "$NVMF_PORT", 00:08:29.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.367 "hdgst": ${hdgst:-false}, 00:08:29.367 "ddgst": ${ddgst:-false} 00:08:29.367 }, 00:08:29.367 "method": "bdev_nvme_attach_controller" 00:08:29.367 } 00:08:29.367 EOF 00:08:29.367 )") 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=67490 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.367 { 00:08:29.367 "params": { 00:08:29.367 "name": "Nvme$subsystem", 00:08:29.367 "trtype": "$TEST_TRANSPORT", 00:08:29.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.367 "adrfam": "ipv4", 00:08:29.367 "trsvcid": "$NVMF_PORT", 00:08:29.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.367 "hdgst": ${hdgst:-false}, 00:08:29.367 "ddgst": ${ddgst:-false} 00:08:29.367 }, 00:08:29.367 "method": "bdev_nvme_attach_controller" 00:08:29.367 } 00:08:29.367 EOF 00:08:29.367 )") 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=67494 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=67497 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.367 { 00:08:29.367 "params": { 00:08:29.367 "name": "Nvme$subsystem", 00:08:29.367 "trtype": 
"$TEST_TRANSPORT", 00:08:29.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.367 "adrfam": "ipv4", 00:08:29.367 "trsvcid": "$NVMF_PORT", 00:08:29.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.367 "hdgst": ${hdgst:-false}, 00:08:29.367 "ddgst": ${ddgst:-false} 00:08:29.367 }, 00:08:29.367 "method": "bdev_nvme_attach_controller" 00:08:29.367 } 00:08:29.367 EOF 00:08:29.367 )") 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:29.367 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:29.367 { 00:08:29.367 "params": { 00:08:29.367 "name": "Nvme$subsystem", 00:08:29.367 "trtype": "$TEST_TRANSPORT", 00:08:29.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:29.367 "adrfam": "ipv4", 00:08:29.368 "trsvcid": "$NVMF_PORT", 00:08:29.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:29.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:29.368 "hdgst": ${hdgst:-false}, 00:08:29.368 "ddgst": ${ddgst:-false} 00:08:29.368 }, 00:08:29.368 "method": "bdev_nvme_attach_controller" 00:08:29.368 } 00:08:29.368 EOF 00:08:29.368 )") 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.368 "params": { 00:08:29.368 "name": "Nvme1", 00:08:29.368 "trtype": "tcp", 00:08:29.368 "traddr": "10.0.0.3", 00:08:29.368 "adrfam": "ipv4", 00:08:29.368 "trsvcid": "4420", 00:08:29.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.368 "hdgst": false, 00:08:29.368 "ddgst": false 00:08:29.368 }, 00:08:29.368 "method": "bdev_nvme_attach_controller" 00:08:29.368 }' 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.368 "params": { 00:08:29.368 "name": "Nvme1", 00:08:29.368 "trtype": "tcp", 00:08:29.368 "traddr": "10.0.0.3", 00:08:29.368 "adrfam": "ipv4", 00:08:29.368 "trsvcid": "4420", 00:08:29.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.368 "hdgst": false, 00:08:29.368 "ddgst": false 00:08:29.368 }, 00:08:29.368 "method": "bdev_nvme_attach_controller" 00:08:29.368 }' 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.368 "params": { 00:08:29.368 "name": "Nvme1", 00:08:29.368 "trtype": "tcp", 00:08:29.368 "traddr": "10.0.0.3", 00:08:29.368 "adrfam": "ipv4", 00:08:29.368 "trsvcid": "4420", 00:08:29.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.368 "hdgst": false, 00:08:29.368 "ddgst": false 00:08:29.368 }, 00:08:29.368 "method": "bdev_nvme_attach_controller" 00:08:29.368 }' 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:29.368 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:29.368 "params": { 00:08:29.368 "name": "Nvme1", 00:08:29.368 "trtype": "tcp", 00:08:29.368 "traddr": "10.0.0.3", 00:08:29.368 "adrfam": "ipv4", 00:08:29.368 "trsvcid": "4420", 00:08:29.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:29.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:29.368 "hdgst": false, 00:08:29.368 "ddgst": false 00:08:29.368 }, 00:08:29.368 "method": "bdev_nvme_attach_controller" 00:08:29.368 }' 00:08:29.625 17:08:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 67488 00:08:29.625 [2024-12-06 17:08:11.684606] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:08:29.625 [2024-12-06 17:08:11.684694] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:29.625 [2024-12-06 17:08:11.694679] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:08:29.625 [2024-12-06 17:08:11.694769] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:29.625 [2024-12-06 17:08:11.696200] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:08:29.625 [2024-12-06 17:08:11.696740] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:29.625 [2024-12-06 17:08:11.705725] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:08:29.625 [2024-12-06 17:08:11.705799] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:29.625 [2024-12-06 17:08:11.888441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.625 [2024-12-06 17:08:11.919654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:29.883 [2024-12-06 17:08:11.927936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.883 [2024-12-06 17:08:11.958770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:29.883 [2024-12-06 17:08:11.973846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.883 [2024-12-06 17:08:12.004325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:29.883 [2024-12-06 17:08:12.015595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.883 Running I/O for 1 seconds... 00:08:29.883 [2024-12-06 17:08:12.046651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:29.883 Running I/O for 1 seconds... 00:08:29.883 Running I/O for 1 seconds... 00:08:29.883 Running I/O for 1 seconds... 00:08:30.813 176232.00 IOPS, 688.41 MiB/s 00:08:30.813 Latency(us) 00:08:30.813 [2024-12-06T17:08:13.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.813 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:30.813 Nvme1n1 : 1.00 175883.66 687.05 0.00 0.00 723.79 297.89 1951.19 00:08:30.813 [2024-12-06T17:08:13.111Z] =================================================================================================================== 00:08:30.813 [2024-12-06T17:08:13.111Z] Total : 175883.66 687.05 0.00 0.00 723.79 297.89 1951.19 00:08:30.813 9979.00 IOPS, 38.98 MiB/s 00:08:30.813 Latency(us) 00:08:30.813 [2024-12-06T17:08:13.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.813 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:30.813 Nvme1n1 : 1.01 10032.84 39.19 0.00 0.00 12702.89 6613.18 18945.86 00:08:30.813 [2024-12-06T17:08:13.111Z] =================================================================================================================== 00:08:30.813 [2024-12-06T17:08:13.111Z] Total : 10032.84 39.19 0.00 0.00 12702.89 6613.18 18945.86 00:08:31.071 7412.00 IOPS, 28.95 MiB/s 00:08:31.071 Latency(us) 00:08:31.071 [2024-12-06T17:08:13.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.071 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:31.071 Nvme1n1 : 1.01 7458.32 29.13 0.00 0.00 17059.75 9711.24 25856.93 00:08:31.071 [2024-12-06T17:08:13.369Z] =================================================================================================================== 00:08:31.071 [2024-12-06T17:08:13.369Z] Total : 7458.32 29.13 0.00 0.00 17059.75 9711.24 25856.93 00:08:31.071 7620.00 IOPS, 29.77 MiB/s 00:08:31.071 Latency(us) 00:08:31.071 [2024-12-06T17:08:13.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.071 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:31.071 Nvme1n1 : 1.01 7698.88 30.07 0.00 0.00 16552.90 5570.56 24903.68 00:08:31.071 [2024-12-06T17:08:13.369Z] 
=================================================================================================================== 00:08:31.071 [2024-12-06T17:08:13.369Z] Total : 7698.88 30.07 0.00 0.00 16552.90 5570.56 24903.68 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 67490 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 67494 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 67497 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:31.071 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:31.071 rmmod nvme_tcp 00:08:31.329 rmmod nvme_fabrics 00:08:31.329 rmmod nvme_keyring 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 67454 ']' 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 67454 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 67454 ']' 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 67454 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67454 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:31.329 killing process with pid 67454 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67454' 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 67454 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 67454 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:31.329 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.588 17:08:13 
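
The killprocess steps traced here follow a fixed pattern: confirm the pid is set, probe liveness with kill -0, read the command name back with ps to make sure the right process is about to die, then kill and reap. A condensed sketch of that pattern (simplified from what common/autotest_common.sh does, not a verbatim copy):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1               # refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 1  # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1          # the real helper special-cases sudo wrappers
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                             # works because the target is a child of this shell
  }
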
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:08:31.588 ************************************ 00:08:31.588 END TEST nvmf_bdev_io_wait 00:08:31.588 ************************************ 00:08:31.588 00:08:31.588 real 0m3.326s 00:08:31.588 user 0m13.049s 00:08:31.588 sys 0m1.940s 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.588 ************************************ 00:08:31.588 START TEST nvmf_queue_depth 00:08:31.588 ************************************ 00:08:31.588 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:31.848 * Looking for test storage... 00:08:31.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:31.848 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:31.848 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:31.848 17:08:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.848 --rc genhtml_branch_coverage=1 00:08:31.848 --rc genhtml_function_coverage=1 00:08:31.848 --rc genhtml_legend=1 00:08:31.848 --rc geninfo_all_blocks=1 00:08:31.848 --rc geninfo_unexecuted_blocks=1 00:08:31.848 00:08:31.848 ' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.848 --rc genhtml_branch_coverage=1 00:08:31.848 --rc genhtml_function_coverage=1 00:08:31.848 --rc genhtml_legend=1 00:08:31.848 --rc geninfo_all_blocks=1 00:08:31.848 --rc geninfo_unexecuted_blocks=1 00:08:31.848 00:08:31.848 ' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.848 --rc genhtml_branch_coverage=1 00:08:31.848 --rc genhtml_function_coverage=1 00:08:31.848 --rc genhtml_legend=1 00:08:31.848 --rc geninfo_all_blocks=1 00:08:31.848 --rc geninfo_unexecuted_blocks=1 00:08:31.848 00:08:31.848 ' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:31.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:31.848 --rc genhtml_branch_coverage=1 00:08:31.848 --rc genhtml_function_coverage=1 00:08:31.848 --rc genhtml_legend=1 00:08:31.848 --rc geninfo_all_blocks=1 00:08:31.848 --rc geninfo_unexecuted_blocks=1 00:08:31.848 00:08:31.848 ' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:31.848 17:08:14 
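
The lt 1.15 2 trace above is scripts/common.sh deciding that the installed lcov (1.15) predates 2.x, which is why the fallback LCOV_OPTS are exported next. The comparison splits both version strings on dots and compares field by field, treating missing fields as zero. A condensed re-implementation of the same idea (an illustrative sketch, not the exact helper):

  # returns 0 (true) when dotted version $1 sorts before $2; assumes numeric fields
  version_lt() {
      local -a a b
      IFS='.-' read -ra a <<< "$1"
      IFS='.-' read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          local x=${a[i]:-0} y=${b[i]:-0}     # absent fields count as 0
          (( x > y )) && return 1
          (( x < y )) && return 0
      done
      return 1                                # equal versions are not less-than
  }
  version_lt 1.15 2 && echo "lcov 1.15 < 2"   # first fields 1 < 2 decide it
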
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.848 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:31.849 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:31.849 
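
The stderr line "common.sh: line 33: [: : integer expression expected" above is a genuine bash complaint: test's -eq needs integers on both sides, and the left side expanded to an empty string because the corresponding knob was never set. The script shrugs it off (the test simply evaluates false), but the usual defensive shapes look like this (illustrative only, not a patch to common.sh):

  flag=""                                     # unset/empty knob, as in the trace
  # [ "$flag" -eq 1 ]                         # reproduces: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] || echo off          # default empty to 0 before the integer test
  [[ -n $flag && $flag -eq 1 ]] || echo off   # or guard on non-emptiness first
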
17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:31.849 17:08:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:31.849 Cannot find device "nvmf_init_br" 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:31.849 Cannot find device "nvmf_init_br2" 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:31.849 Cannot find device "nvmf_tgt_br" 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:31.849 Cannot find device "nvmf_tgt_br2" 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:31.849 Cannot find device "nvmf_init_br" 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:31.849 Cannot find device "nvmf_init_br2" 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:08:31.849 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:32.108 Cannot find device "nvmf_tgt_br" 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:32.108 Cannot find device "nvmf_tgt_br2" 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:32.108 Cannot find device "nvmf_br" 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:32.108 Cannot find device "nvmf_init_if" 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:32.108 Cannot find device "nvmf_init_if2" 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.108 17:08:14 
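
All the "Cannot find device ..." messages here are expected noise: before building its topology, nvmf_veth_init tears down anything a previous run may have left behind, and every teardown command is allowed to fail (each failure is followed by "# true" in the trace). The pattern, condensed (an abridged sketch of what test/nvmf/common.sh does):

  # idempotent pre-clean: failures mean "nothing to remove", which is fine
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true
      ip link set "$dev" down     2>/dev/null || true
  done
  ip link delete nvmf_br type bridge 2>/dev/null || true
  ip link delete nvmf_init_if        2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk   2>/dev/null || true
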
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.108 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.108 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.109 
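
What is being assembled above (and finished by the enslaving just below): a network namespace for the target, four veth pairs, and a bridge stitching the host-side ends together, so the initiator addresses (10.0.0.1/.2) get an L2 path to the target addresses (10.0.0.3/.4 inside the namespace). Abridged to one initiator/target pair, the build looks roughly like this (a sketch of nvmf_veth_init, not the full helper):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # enslave both *_br ends
  ip link set nvmf_tgt_br master nvmf_br
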
17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.109 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:32.368 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.368 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:32.368 00:08:32.368 --- 10.0.0.3 ping statistics --- 00:08:32.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.368 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:32.368 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:32.368 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:08:32.368 00:08:32.368 --- 10.0.0.4 ping statistics --- 00:08:32.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.368 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.368 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.368 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:32.368 00:08:32.368 --- 10.0.0.1 ping statistics --- 00:08:32.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.368 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:32.368 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:32.368 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:08:32.368 00:08:32.368 --- 10.0.0.2 ping statistics --- 00:08:32.368 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.368 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67754 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67754 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67754 ']' 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.368 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.368 [2024-12-06 17:08:14.534687] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
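
Note how each ACCEPT rule a few lines up was installed with -m comment --comment 'SPDK_NVMF:...': every rule carries its own removal tag. Teardown (it shows up again at the end of this test) then needs no bookkeeping, just a filter pass over the saved ruleset:

  # add tagged (what the ipts wrapper in the trace does):
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # sweep every tagged rule in one pass (the iptr step later in the log):
  iptables-save | grep -v SPDK_NVMF | iptables-restore
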
00:08:32.368 [2024-12-06 17:08:14.534790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.628 [2024-12-06 17:08:14.684246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.628 [2024-12-06 17:08:14.730010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.628 [2024-12-06 17:08:14.730109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.628 [2024-12-06 17:08:14.730144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:32.628 [2024-12-06 17:08:14.730158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:32.628 [2024-12-06 17:08:14.730171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:32.628 [2024-12-06 17:08:14.730581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.628 [2024-12-06 17:08:14.866649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.628 Malloc0 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
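
The -m 0x2 mask handed to nvmf_tgt above is a CPU bitmap: bit 1 set means the single reactor runs on core 1, exactly what the "Reactor started on core 1" notice reports (bdevperf is launched below with -c 0x1, i.e. core 0, so the two apps do not share a core). A one-liner to decode such masks, purely as an illustration:

  mask=0x2; printf 'cores:'; for i in {0..31}; do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done; echo
  # -> cores: 1
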
00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.628 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.629 [2024-12-06 17:08:14.910225] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67789 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67789 /var/tmp/bdevperf.sock 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67789 ']' 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:32.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:32.629 17:08:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:32.888 [2024-12-06 17:08:14.976452] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
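
Strung together, the rpc_cmd calls above are the whole target bring-up: TCP transport, a RAM-backed bdev, a subsystem, a namespace, a listener. Issued directly against the default RPC socket the sequence reads roughly like this (a condensed sketch; the harness's rpc_cmd wraps the same calls in retry and xtrace plumbing):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192   # flags as traced; -u 8192 sets io_unit_size
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
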
00:08:32.888 [2024-12-06 17:08:14.976578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67789 ] 00:08:32.888 [2024-12-06 17:08:15.126031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.888 [2024-12-06 17:08:15.168978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:33.146 NVMe0n1 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.146 17:08:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:33.404 Running I/O for 10 seconds... 00:08:35.273 7191.00 IOPS, 28.09 MiB/s [2024-12-06T17:08:18.510Z] 7657.50 IOPS, 29.91 MiB/s [2024-12-06T17:08:19.902Z] 7747.00 IOPS, 30.26 MiB/s [2024-12-06T17:08:20.481Z] 7681.00 IOPS, 30.00 MiB/s [2024-12-06T17:08:21.857Z] 7780.40 IOPS, 30.39 MiB/s [2024-12-06T17:08:22.792Z] 7850.17 IOPS, 30.66 MiB/s [2024-12-06T17:08:23.726Z] 7900.29 IOPS, 30.86 MiB/s [2024-12-06T17:08:24.660Z] 7938.12 IOPS, 31.01 MiB/s [2024-12-06T17:08:25.597Z] 7964.89 IOPS, 31.11 MiB/s [2024-12-06T17:08:25.597Z] 8021.00 IOPS, 31.33 MiB/s 00:08:43.299 Latency(us) 00:08:43.299 [2024-12-06T17:08:25.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.299 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:43.299 Verification LBA range: start 0x0 length 0x4000 00:08:43.299 NVMe0n1 : 10.08 8062.01 31.49 0.00 0.00 126378.85 23473.80 117726.49 00:08:43.299 [2024-12-06T17:08:25.597Z] =================================================================================================================== 00:08:43.299 [2024-12-06T17:08:25.597Z] Total : 8062.01 31.49 0.00 0.00 126378.85 23473.80 117726.49 00:08:43.299 { 00:08:43.299 "results": [ 00:08:43.299 { 00:08:43.299 "job": "NVMe0n1", 00:08:43.299 "core_mask": "0x1", 00:08:43.299 "workload": "verify", 00:08:43.299 "status": "finished", 00:08:43.299 "verify_range": { 00:08:43.299 "start": 0, 00:08:43.299 "length": 16384 00:08:43.299 }, 00:08:43.299 "queue_depth": 1024, 00:08:43.299 "io_size": 4096, 00:08:43.299 "runtime": 10.077633, 00:08:43.299 "iops": 8062.012180836512, 00:08:43.299 "mibps": 31.492235081392625, 00:08:43.299 "io_failed": 0, 00:08:43.299 "io_timeout": 0, 00:08:43.299 "avg_latency_us": 126378.85386594696, 00:08:43.299 "min_latency_us": 23473.803636363635, 00:08:43.299 "max_latency_us": 117726.48727272727 00:08:43.299 } 00:08:43.299 ], 00:08:43.299 "core_count": 1 00:08:43.299 } 00:08:43.299 17:08:25 
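
The queue-depth run itself drives bdevperf as a daemon: -z starts it idle on its own RPC socket, the NVMe-oF controller is attached over that socket, and perform_tests fires the preconfigured workload (depth 1024, 4 KiB verify, 10 s). Condensed from the trace (the harness additionally waits for each socket before issuing RPCs):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # exposes NVMe0n1
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # sanity via Little's law: depth/IOPS = 1024/8062 ≈ 0.127 s, consistent with the ~126 ms average latency reported above
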
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67789 00:08:43.299 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67789 ']' 00:08:43.299 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67789 00:08:43.299 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:43.299 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.299 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67789 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.558 killing process with pid 67789 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67789' 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67789 00:08:43.558 Received shutdown signal, test time was about 10.000000 seconds 00:08:43.558 00:08:43.558 Latency(us) 00:08:43.558 [2024-12-06T17:08:25.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.558 [2024-12-06T17:08:25.856Z] =================================================================================================================== 00:08:43.558 [2024-12-06T17:08:25.856Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67789 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.558 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:43.815 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.815 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:43.815 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.816 rmmod nvme_tcp 00:08:43.816 rmmod nvme_fabrics 00:08:43.816 rmmod nvme_keyring 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67754 ']' 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67754 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67754 ']' 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- 
# kill -0 67754 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67754 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.816 killing process with pid 67754 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67754' 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67754 00:08:43.816 17:08:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67754 00:08:44.073 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:44.073 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:44.073 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.074 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:44.332 00:08:44.332 real 0m12.507s 00:08:44.332 user 0m21.177s 00:08:44.332 sys 0m2.019s 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.332 ************************************ 00:08:44.332 END TEST nvmf_queue_depth 00:08:44.332 ************************************ 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.332 ************************************ 00:08:44.332 START TEST nvmf_target_multipath 00:08:44.332 ************************************ 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:44.332 * Looking for test storage... 
00:08:44.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.332 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.333 --rc genhtml_branch_coverage=1 00:08:44.333 --rc genhtml_function_coverage=1 00:08:44.333 --rc genhtml_legend=1 00:08:44.333 --rc geninfo_all_blocks=1 00:08:44.333 --rc geninfo_unexecuted_blocks=1 00:08:44.333 00:08:44.333 ' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.333 --rc genhtml_branch_coverage=1 00:08:44.333 --rc genhtml_function_coverage=1 00:08:44.333 --rc genhtml_legend=1 00:08:44.333 --rc geninfo_all_blocks=1 00:08:44.333 --rc geninfo_unexecuted_blocks=1 00:08:44.333 00:08:44.333 ' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.333 --rc genhtml_branch_coverage=1 00:08:44.333 --rc genhtml_function_coverage=1 00:08:44.333 --rc genhtml_legend=1 00:08:44.333 --rc geninfo_all_blocks=1 00:08:44.333 --rc geninfo_unexecuted_blocks=1 00:08:44.333 00:08:44.333 ' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:44.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.333 --rc genhtml_branch_coverage=1 00:08:44.333 --rc genhtml_function_coverage=1 00:08:44.333 --rc genhtml_legend=1 00:08:44.333 --rc geninfo_all_blocks=1 00:08:44.333 --rc geninfo_unexecuted_blocks=1 00:08:44.333 00:08:44.333 ' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.333 
17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.333 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.353 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.353 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.353 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.353 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.353 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.353 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:44.612 17:08:26 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:44.612 Cannot find device "nvmf_init_br" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:44.612 Cannot find device "nvmf_init_br2" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:44.612 Cannot find device "nvmf_tgt_br" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.612 Cannot find device "nvmf_tgt_br2" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:44.612 Cannot find device "nvmf_init_br" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:44.612 Cannot find device "nvmf_init_br2" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:44.612 Cannot find device "nvmf_tgt_br" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:44.612 Cannot find device "nvmf_tgt_br2" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:44.612 Cannot find device "nvmf_br" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:44.612 Cannot find device "nvmf_init_if" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:44.612 Cannot find device "nvmf_init_if2" 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.612 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
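The entries above are nvmf_veth_init building the test network: two initiator-side interfaces (nvmf_init_if at 10.0.0.1/24, nvmf_init_if2 at 10.0.0.2/24) stay in the root namespace, while the target-side interfaces (nvmf_tgt_if at 10.0.0.3/24, nvmf_tgt_if2 at 10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace; the nvmf_br bridge that joins the *_br peer ends is added just below. As a minimal standalone sketch of the same idea, simplified to a single veth pair linking the two namespaces directly (no bridge; interface names and addresses taken from the trace, run as root):

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair; the test itself creates four pairs and joins them via the nvmf_br bridge
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  # same reachability check the script performs with ping just below
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1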
00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.613 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.872 17:08:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:44.872 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:44.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:08:44.872 00:08:44.872 --- 10.0.0.3 ping statistics --- 00:08:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.872 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:08:44.872 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:44.872 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:44.872 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:08:44.872 00:08:44.872 --- 10.0.0.4 ping statistics --- 00:08:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.872 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:44.872 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:44.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:44.872 00:08:44.872 --- 10.0.0.1 ping statistics --- 00:08:44.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.872 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:44.872 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:08:44.873 00:08:44.873 --- 10.0.0.2 ping statistics --- 00:08:44.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.873 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=68166 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 68166 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 68166 ']' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.873 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.873 [2024-12-06 17:08:27.110096] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:08:44.873 [2024-12-06 17:08:27.110207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.131 [2024-12-06 17:08:27.264609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.131 [2024-12-06 17:08:27.306189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.131 [2024-12-06 17:08:27.306262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.131 [2024-12-06 17:08:27.306313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.131 [2024-12-06 17:08:27.306329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.131 [2024-12-06 17:08:27.306343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.131 [2024-12-06 17:08:27.307320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.131 [2024-12-06 17:08:27.307432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.131 [2024-12-06 17:08:27.307567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.131 [2024-12-06 17:08:27.307584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.131 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.131 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:08:45.131 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.131 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.131 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:45.389 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.389 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:45.647 [2024-12-06 17:08:27.752792] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.647 17:08:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:45.904 Malloc0 00:08:45.904 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
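With the target application listening on its RPC socket, the trace provisions it over JSON-RPC: a TCP transport, a RAM-backed bdev, and a subsystem that the next entries populate with the namespace and two listeners. Condensed into one sequence (commands and arguments exactly as they appear in the trace; rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and a running nvmf_tgt is assumed):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                   # enable the TCP transport
  rpc.py bdev_malloc_create 64 512 -b Malloc0                      # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # attach the bdev as a namespace
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

Two listeners on the same subsystem give the host two paths to one namespace; the nvme connect calls below attach both, which is what yields the nvme0c0n1/nvme0c1n1 path devices under the single nvme0n1 multipath node that fio later exercises.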
00:08:46.161 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:46.446 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:46.704 [2024-12-06 17:08:28.911607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.704 17:08:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:46.962 [2024-12-06 17:08:29.179851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:46.962 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:47.219 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:47.476 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:47.476 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:08:47.476 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:47.476 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:47.476 17:08:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=68296 00:08:49.375 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:49.376 17:08:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:49.376 [global] 00:08:49.376 thread=1 00:08:49.376 invalidate=1 00:08:49.376 rw=randrw 00:08:49.376 time_based=1 00:08:49.376 runtime=6 00:08:49.376 ioengine=libaio 00:08:49.376 direct=1 00:08:49.376 bs=4096 00:08:49.376 iodepth=128 00:08:49.376 norandommap=0 00:08:49.376 numjobs=1 00:08:49.376 00:08:49.634 verify_dump=1 00:08:49.634 verify_backlog=512 00:08:49.634 verify_state_save=0 00:08:49.634 do_verify=1 00:08:49.634 verify=crc32c-intel 00:08:49.634 [job0] 00:08:49.634 filename=/dev/nvme0n1 00:08:49.634 Could not set queue depth (nvme0n1) 00:08:49.634 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:49.634 fio-3.35 00:08:49.634 Starting 1 thread 00:08:50.569 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:50.827 17:08:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:51.086 17:08:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:52.020 17:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:52.020 17:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:52.020 17:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:52.020 17:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:52.586 17:08:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:52.844 17:08:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:53.778 17:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:53.778 17:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:53.778 17:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:53.778 17:08:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 68296 00:08:56.311 00:08:56.311 job0: (groupid=0, jobs=1): err= 0: pid=68317: Fri Dec 6 17:08:37 2024 00:08:56.311 read: IOPS=10.8k, BW=42.0MiB/s (44.0MB/s)(252MiB/6006msec) 00:08:56.311 slat (usec): min=2, max=8438, avg=52.99, stdev=244.17 00:08:56.311 clat (usec): min=1119, max=17151, avg=8133.65, stdev=1291.90 00:08:56.311 lat (usec): min=1393, max=17172, avg=8186.64, stdev=1302.56 00:08:56.311 clat percentiles (usec): 00:08:56.311 | 1.00th=[ 4817], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 7373], 00:08:56.311 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8225], 00:08:56.311 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[10421], 00:08:56.311 | 99.00th=[12387], 99.50th=[13304], 99.90th=[14877], 99.95th=[15533], 00:08:56.311 | 99.99th=[16581] 00:08:56.311 bw ( KiB/s): min= 8168, max=28392, per=51.83%, avg=22294.73, stdev=6223.06, samples=11 00:08:56.311 iops : min= 2042, max= 7098, avg=5573.64, stdev=1555.73, samples=11 00:08:56.311 write: IOPS=6379, BW=24.9MiB/s (26.1MB/s)(132MiB/5314msec); 0 zone resets 00:08:56.311 slat (usec): min=4, max=2795, avg=65.39, stdev=165.50 00:08:56.311 clat (usec): min=880, max=15445, avg=6980.99, stdev=1070.28 00:08:56.311 lat (usec): min=943, max=15470, avg=7046.38, stdev=1075.15 00:08:56.311 clat percentiles (usec): 00:08:56.311 | 1.00th=[ 3884], 5.00th=[ 5145], 10.00th=[ 5932], 20.00th=[ 6390], 00:08:56.311 | 30.00th=[ 6652], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7177], 00:08:56.311 | 70.00th=[ 7373], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8455], 00:08:56.311 | 99.00th=[10290], 99.50th=[11207], 99.90th=[13304], 99.95th=[14353], 00:08:56.311 | 99.99th=[15270] 00:08:56.311 bw ( KiB/s): min= 8752, max=27872, per=87.57%, avg=22347.27, stdev=5888.27, samples=11 00:08:56.311 iops : min= 2188, max= 6968, avg=5586.82, stdev=1472.07, samples=11 00:08:56.311 lat (usec) : 1000=0.01% 00:08:56.311 lat (msec) : 2=0.02%, 4=0.55%, 10=94.69%, 20=4.73% 00:08:56.311 cpu : usr=5.30%, sys=23.46%, ctx=6312, majf=0, minf=102 00:08:56.311 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:56.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:56.311 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:56.311 issued rwts: total=64580,33902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:56.311 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:56.311 00:08:56.311 Run status group 0 (all jobs): 00:08:56.311 READ: bw=42.0MiB/s (44.0MB/s), 42.0MiB/s-42.0MiB/s (44.0MB/s-44.0MB/s), io=252MiB (265MB), run=6006-6006msec 00:08:56.311 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=132MiB (139MB), run=5314-5314msec 00:08:56.311 00:08:56.311 Disk stats (read/write): 00:08:56.311 nvme0n1: ios=63669/33263, merge=0/0, ticks=485938/217745, in_queue=703683, util=98.48% 00:08:56.311 17:08:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:56.311 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:08:56.570 17:08:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=68450 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:57.507 17:08:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:57.507 [global] 00:08:57.507 thread=1 00:08:57.507 invalidate=1 00:08:57.507 rw=randrw 00:08:57.507 time_based=1 00:08:57.507 runtime=6 00:08:57.507 ioengine=libaio 00:08:57.507 direct=1 00:08:57.507 bs=4096 00:08:57.507 iodepth=128 00:08:57.507 norandommap=0 00:08:57.507 numjobs=1 00:08:57.507 00:08:57.507 verify_dump=1 00:08:57.507 verify_backlog=512 00:08:57.507 verify_state_save=0 00:08:57.507 do_verify=1 00:08:57.507 verify=crc32c-intel 00:08:57.507 [job0] 00:08:57.507 filename=/dev/nvme0n1 00:08:57.507 Could not set queue depth (nvme0n1) 00:08:57.507 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:57.507 fio-3.35 00:08:57.507 Starting 1 thread 00:08:58.442 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:58.699 17:08:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:58.956 17:08:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:00.339 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:00.339 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.339 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.339 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:09:00.339 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:00.597 17:08:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:09:01.970 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:09:01.970 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:01.970 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:01.970 17:08:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 68450 00:09:03.869 00:09:03.869 job0: (groupid=0, jobs=1): err= 0: pid=68477: Fri Dec 6 17:08:45 2024 00:09:03.869 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(282MiB/6004msec) 00:09:03.869 slat (usec): min=3, max=7653, avg=42.33, stdev=210.70 00:09:03.869 clat (usec): min=211, max=45603, avg=7367.39, stdev=1996.15 00:09:03.869 lat (usec): min=240, max=45618, avg=7409.72, stdev=2011.50 00:09:03.869 clat percentiles (usec): 00:09:03.869 | 1.00th=[ 1631], 5.00th=[ 3752], 10.00th=[ 4686], 20.00th=[ 5932], 00:09:03.891 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7832], 00:09:03.891 | 70.00th=[ 8160], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10290], 00:09:03.891 | 99.00th=[11994], 99.50th=[12649], 99.90th=[15008], 99.95th=[15795], 00:09:03.891 | 99.99th=[44303] 00:09:03.891 bw ( KiB/s): min= 8544, max=39232, per=54.11%, avg=26031.27, stdev=9631.27, samples=11 00:09:03.891 iops : min= 2136, max= 9808, avg=6507.82, stdev=2407.82, samples=11 00:09:03.891 write: IOPS=7188, BW=28.1MiB/s (29.4MB/s)(148MiB/5275msec); 0 zone resets 00:09:03.891 slat (usec): min=4, max=2806, avg=53.27, stdev=132.00 00:09:03.891 clat (usec): min=215, max=15846, avg=6067.18, stdev=1779.96 00:09:03.891 lat (usec): min=255, max=15890, avg=6120.44, stdev=1791.38 00:09:03.891 clat percentiles (usec): 00:09:03.891 | 1.00th=[ 1237], 5.00th=[ 2900], 10.00th=[ 3523], 20.00th=[ 4293], 00:09:03.891 | 30.00th=[ 5276], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6849], 00:09:03.891 | 70.00th=[ 7111], 80.00th=[ 7373], 90.00th=[ 7767], 95.00th=[ 8225], 00:09:03.891 | 99.00th=[10290], 99.50th=[10683], 99.90th=[12387], 99.95th=[12911], 00:09:03.891 | 99.99th=[13698] 00:09:03.891 bw ( KiB/s): min= 9072, max=39840, per=90.40%, avg=25993.45, stdev=9442.83, samples=11 00:09:03.891 iops : min= 2268, max= 9960, avg=6498.36, stdev=2360.71, samples=11 00:09:03.891 lat (usec) : 250=0.01%, 500=0.05%, 750=0.11%, 1000=0.20% 00:09:03.891 lat (msec) : 2=1.30%, 4=7.70%, 10=86.48%, 20=4.14%, 50=0.01% 00:09:03.891 cpu : usr=6.26%, sys=25.45%, ctx=7630, majf=0, minf=102 00:09:03.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:09:03.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.891 issued rwts: total=72206,37919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.891 00:09:03.891 Run status group 0 (all jobs): 00:09:03.891 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=282MiB (296MB), run=6004-6004msec 00:09:03.891 WRITE: bw=28.1MiB/s (29.4MB/s), 28.1MiB/s-28.1MiB/s (29.4MB/s-29.4MB/s), io=148MiB (155MB), run=5275-5275msec 00:09:03.891 00:09:03.891 Disk stats (read/write): 00:09:03.891 nvme0n1: ios=70753/37919, merge=0/0, ticks=485740/212523, in_queue=698263, util=98.62% 00:09:03.891 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:03.891 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.891 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:09:03.891 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:03.891 17:08:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.891 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:03.891 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.891 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:09:03.891 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.150 rmmod nvme_tcp 00:09:04.150 rmmod nvme_fabrics 00:09:04.150 rmmod nvme_keyring 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 68166 ']' 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 68166 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 68166 ']' 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 68166 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:09:04.150 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68166 00:09:04.410 killing process with pid 68166 00:09:04.410 
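The multipath verification above leans on a small poll-with-timeout idiom: check_ana_state re-reads /sys/block/<ctrl>/ana_state once a second until it reports the expected ANA value or a 20-iteration budget runs out, and waitforserial_disconnect plays the same game with lsblk until the namespace's serial disappears. A minimal sketch of the first helper, reconstructed from the xtrace lines above (approximate, not the verbatim script):

    # Poll a controller's sysfs ana_state file until it matches the expected
    # value; give up after roughly 20 seconds. Reconstructed from the trace.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            if (( timeout-- == 0 )); then
                return 1
            fi
        done
    }

The trace shows exactly this loop firing: the nvme0c1n1 check spins while the file still reads non-optimized and passes once it flips to inaccessible.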
17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68166' 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 68166 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 68166 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:04.410 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:04.696 17:08:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:09:04.696 00:09:04.696 real 0m20.452s 00:09:04.696 user 1m19.890s 00:09:04.696 sys 0m6.575s 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:04.696 ************************************ 00:09:04.696 END TEST nvmf_target_multipath 00:09:04.696 ************************************ 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.696 ************************************ 00:09:04.696 START TEST nvmf_zcopy 00:09:04.696 ************************************ 00:09:04.696 17:08:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:04.970 * Looking for test storage... 
00:09:04.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:04.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.970 --rc genhtml_branch_coverage=1 00:09:04.970 --rc genhtml_function_coverage=1 00:09:04.970 --rc genhtml_legend=1 00:09:04.970 --rc geninfo_all_blocks=1 00:09:04.970 --rc geninfo_unexecuted_blocks=1 00:09:04.970 00:09:04.970 ' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:04.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.970 --rc genhtml_branch_coverage=1 00:09:04.970 --rc genhtml_function_coverage=1 00:09:04.970 --rc genhtml_legend=1 00:09:04.970 --rc geninfo_all_blocks=1 00:09:04.970 --rc geninfo_unexecuted_blocks=1 00:09:04.970 00:09:04.970 ' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:04.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.970 --rc genhtml_branch_coverage=1 00:09:04.970 --rc genhtml_function_coverage=1 00:09:04.970 --rc genhtml_legend=1 00:09:04.970 --rc geninfo_all_blocks=1 00:09:04.970 --rc geninfo_unexecuted_blocks=1 00:09:04.970 00:09:04.970 ' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:04.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.970 --rc genhtml_branch_coverage=1 00:09:04.970 --rc genhtml_function_coverage=1 00:09:04.970 --rc genhtml_legend=1 00:09:04.970 --rc geninfo_all_blocks=1 00:09:04.970 --rc geninfo_unexecuted_blocks=1 00:09:04.970 00:09:04.970 ' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
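The lcov probe above walks scripts/common.sh's field-wise version comparison: both version strings are split on '.', '-' and ':' into arrays, and the components are compared numerically, index by index, until one side wins. A simplified sketch of that flow under the '<' operator (the real cmp_versions dispatches on $op and also validates that each field is decimal):

    # Return 0 (true) if dotted version $1 sorts strictly below $2.
    # Simplified from the cmp_versions trace above.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                return 1
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                return 0
            fi
        done
        return 1    # equal versions are not "less than"
    }

Here version_lt 1.15 2 succeeds (1 < 2 on the first field), so the trace takes the old-lcov branch and exports the --rc lcov_branch_coverage/lcov_function_coverage flags seen below.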
00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.970 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.971 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
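One wart worth flagging in the sourcing above: common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty left operand with "integer expression expected", so the check merely logs a complaint and falls through to the else branch. The conventional guard is to default the variable before a numeric test; an illustrative spelling (variable name made up, not a patch from the repo):

    # [ "$SPDK_TEST_EXAMPLE" -eq 1 ] errors out when the variable is unset or
    # empty; ${var:-0} substitutes 0 so the comparison is always well-formed.
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi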
00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:04.971 Cannot find device "nvmf_init_br" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:09:04.971 17:08:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:04.971 Cannot find device "nvmf_init_br2" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:04.971 Cannot find device "nvmf_tgt_br" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.971 Cannot find device "nvmf_tgt_br2" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:04.971 Cannot find device "nvmf_init_br" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:04.971 Cannot find device "nvmf_init_br2" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:04.971 Cannot find device "nvmf_tgt_br" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:04.971 Cannot find device "nvmf_tgt_br2" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:04.971 Cannot find device "nvmf_br" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:04.971 Cannot find device "nvmf_init_if" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:04.971 Cannot find device "nvmf_init_if2" 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:09:04.971 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.230 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:05.230 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:05.231 17:08:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:05.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:05.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:05.231 00:09:05.231 --- 10.0.0.3 ping statistics --- 00:09:05.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.231 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:05.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:05.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:09:05.231 00:09:05.231 --- 10.0.0.4 ping statistics --- 00:09:05.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.231 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:05.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:09:05.231 00:09:05.231 --- 10.0.0.1 ping statistics --- 00:09:05.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.231 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:05.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:09:05.231 00:09:05.231 --- 10.0.0.2 ping statistics --- 00:09:05.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.231 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:05.231 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68806 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68806 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68806 ']' 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.492 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.492 [2024-12-06 17:08:47.603657] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:09:05.492 [2024-12-06 17:08:47.603752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.492 [2024-12-06 17:08:47.746286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.492 [2024-12-06 17:08:47.778091] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.492 [2024-12-06 17:08:47.778149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.492 [2024-12-06 17:08:47.778160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.492 [2024-12-06 17:08:47.778168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.492 [2024-12-06 17:08:47.778175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.492 [2024-12-06 17:08:47.778486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 [2024-12-06 17:08:47.935953] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.769 [2024-12-06 17:08:47.956065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:09:05.769 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.770 malloc0 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:05.770 { 00:09:05.770 "params": { 00:09:05.770 "name": "Nvme$subsystem", 00:09:05.770 "trtype": "$TEST_TRANSPORT", 00:09:05.770 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:05.770 "adrfam": "ipv4", 00:09:05.770 "trsvcid": "$NVMF_PORT", 00:09:05.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:05.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:05.770 "hdgst": ${hdgst:-false}, 00:09:05.770 "ddgst": ${ddgst:-false} 00:09:05.770 }, 00:09:05.770 "method": "bdev_nvme_attach_controller" 00:09:05.770 } 00:09:05.770 EOF 00:09:05.770 )") 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
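Stripped of the xtrace noise, the target bring-up above reduces to a short rpc.py sequence followed by a bdevperf run that reads its generated JSON config from an inherited file descriptor. A condensed replay, with commands and arguments as they appear in the trace (rpc_cmd is a thin wrapper over scripts/rpc.py):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy             # TCP transport, zero-copy on
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0                    # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # bdevperf consumes the gen_nvmf_target_json output through /dev/fd:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

The heredoc template piped in above is rendered just below into a single Nvme1 attach_controller stanza pointing at tcp/10.0.0.3:4420, which is the bdev that bdevperf verifies for 10 seconds at queue depth 128.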
00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:05.770 17:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:05.770 "params": { 00:09:05.770 "name": "Nvme1", 00:09:05.770 "trtype": "tcp", 00:09:05.770 "traddr": "10.0.0.3", 00:09:05.770 "adrfam": "ipv4", 00:09:05.770 "trsvcid": "4420", 00:09:05.770 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:05.770 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:05.770 "hdgst": false, 00:09:05.770 "ddgst": false 00:09:05.770 }, 00:09:05.770 "method": "bdev_nvme_attach_controller" 00:09:05.770 }' 00:09:05.770 [2024-12-06 17:08:48.046789] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:09:05.770 [2024-12-06 17:08:48.046889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68848 ] 00:09:06.029 [2024-12-06 17:08:48.200690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.029 [2024-12-06 17:08:48.247498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.290 Running I/O for 10 seconds... 00:09:08.161 5856.00 IOPS, 45.75 MiB/s [2024-12-06T17:08:51.833Z] 5937.00 IOPS, 46.38 MiB/s [2024-12-06T17:08:52.765Z] 5958.33 IOPS, 46.55 MiB/s [2024-12-06T17:08:53.730Z] 5958.50 IOPS, 46.55 MiB/s [2024-12-06T17:08:54.663Z] 5962.40 IOPS, 46.58 MiB/s [2024-12-06T17:08:55.660Z] 5964.67 IOPS, 46.60 MiB/s [2024-12-06T17:08:56.595Z] 5971.29 IOPS, 46.65 MiB/s [2024-12-06T17:08:57.528Z] 5977.62 IOPS, 46.70 MiB/s [2024-12-06T17:08:58.461Z] 5979.89 IOPS, 46.72 MiB/s [2024-12-06T17:08:58.461Z] 5975.60 IOPS, 46.68 MiB/s 00:09:16.163 Latency(us) 00:09:16.163 [2024-12-06T17:08:58.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:16.163 Verification LBA range: start 0x0 length 0x1000 00:09:16.163 Nvme1n1 : 10.02 5978.59 46.71 0.00 0.00 21334.83 3351.27 35031.97 00:09:16.163 [2024-12-06T17:08:58.461Z] =================================================================================================================== 00:09:16.163 [2024-12-06T17:08:58.461Z] Total : 5978.59 46.71 0.00 0.00 21334.83 3351.27 35031.97 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68961 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:16.422 { 00:09:16.422 "params": { 00:09:16.422 "name": "Nvme$subsystem", 
00:09:16.422 "trtype": "$TEST_TRANSPORT", 00:09:16.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:16.422 "adrfam": "ipv4", 00:09:16.422 "trsvcid": "$NVMF_PORT", 00:09:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:16.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:16.422 "hdgst": ${hdgst:-false}, 00:09:16.422 "ddgst": ${ddgst:-false} 00:09:16.422 }, 00:09:16.422 "method": "bdev_nvme_attach_controller" 00:09:16.422 } 00:09:16.422 EOF 00:09:16.422 )") 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:16.422 [2024-12-06 17:08:58.579046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.579109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:16.422 17:08:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:16.422 "params": { 00:09:16.422 "name": "Nvme1", 00:09:16.422 "trtype": "tcp", 00:09:16.422 "traddr": "10.0.0.3", 00:09:16.422 "adrfam": "ipv4", 00:09:16.422 "trsvcid": "4420", 00:09:16.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:16.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:16.422 "hdgst": false, 00:09:16.422 "ddgst": false 00:09:16.422 }, 00:09:16.422 "method": "bdev_nvme_attach_controller" 00:09:16.422 }' 00:09:16.422 [2024-12-06 17:08:58.591032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.591084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.603016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.603074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.615028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.615081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.627058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.627118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.637515] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:09:16.422 [2024-12-06 17:08:58.637614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68961 ] 00:09:16.422 [2024-12-06 17:08:58.639065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.639114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.651041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.651088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.663021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.663065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.675003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.675038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.422 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.422 [2024-12-06 17:08:58.686999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.422 [2024-12-06 17:08:58.687033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:16.423 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.423 [2024-12-06 17:08:58.699002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.423 [2024-12-06 17:08:58.699033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.423 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.423 [2024-12-06 17:08:58.711016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.423 [2024-12-06 17:08:58.711047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.423 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.682 [2024-12-06 17:08:58.723015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.682 [2024-12-06 17:08:58.723046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.682 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.682 [2024-12-06 17:08:58.735016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.682 [2024-12-06 17:08:58.735047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.682 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.682 [2024-12-06 17:08:58.747019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.682 [2024-12-06 17:08:58.747049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.682 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:16.682 [2024-12-06 17:08:58.759022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:16.682 [2024-12-06 17:08:58.759054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:16.682 2024/12/06 17:08:58 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
[... identical nvmf_subsystem_add_ns failures continue from 17:08:58.771 ...]
00:09:16.682 [2024-12-06 17:08:58.795285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:16.682 [2024-12-06 17:08:58.834575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... identical nvmf_subsystem_add_ns failures continue through 17:08:58.975; omitted ...]
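The Go client lines above are relaying a standard JSON-RPC 2.0 error object; -32602 is the spec's "Invalid params" code, which SPDK reports as Msg=Invalid parameters. A sketch of the underlying exchange over the target's RPC socket; /var/tmp/spdk.sock is SPDK's default socket path, and driving it with nc -U as shown is an illustration rather than anything this job actually does:

  printf '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_add_ns","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","namespace":{"bdev_name":"malloc0","nsid":1}}}' | nc -U /var/tmp/spdk.sock
  # expected reply while NSID 1 is taken:
  # {"jsonrpc":"2.0","id":1,"error":{"code":-32602,"message":"Invalid parameters"}}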
00:09:16.940 Running I/O for 5 seconds...
[... identical nvmf_subsystem_add_ns failures continue from 17:08:58.987 through 17:08:59.922; omitted ...]
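The "Running I/O for 5 seconds..." banner and the EAL parameters earlier identify the application under test as SPDK's bdevperf. A rough standalone equivalent, assuming an SPDK build tree at ./build; the config path, queue depth, I/O size, and workload below are illustrative, not values taken from this job:

  ./build/examples/bdevperf -m 0x1 --json ./bdev.json -q 128 -o 8192 -w randwrite -t 5
  # -m 0x1 pins the single reactor to core 0 (cf. "Total cores available: 1" above)
  # -t 5 gives the five-second run; -o 8192 requests 8 KiB I/Os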
[... identical nvmf_subsystem_add_ns failures continue through 17:08:59.982 ...]
00:09:17.779 11458.00 IOPS, 89.52 MiB/s [2024-12-06T17:09:00.077Z]
[... identical nvmf_subsystem_add_ns failures continue from 17:08:59.997 through 17:09:00.401; omitted ...]
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.295 [2024-12-06 17:09:00.416634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.295 [2024-12-06 17:09:00.416672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.295 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.295 [2024-12-06 17:09:00.433491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.295 [2024-12-06 17:09:00.433530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.295 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.448790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.448831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.465365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.465434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.481422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.481468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.498948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.498989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.515228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.515279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.532029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.532076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.547735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.547775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.564790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.564827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.296 [2024-12-06 17:09:00.581168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.296 [2024-12-06 17:09:00.581207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.296 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.596669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.596708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:09:18.554 [2024-12-06 17:09:00.612376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.612418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.627874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.627918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.639191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.639327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.655175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.655325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.672126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.672241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.691426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.691543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.707251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.707372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.724771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.724907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.741290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.741421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.752237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.752374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.767471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.767587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.782976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.783045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.800558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.800626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.816168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.816249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.827308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.827376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.554 [2024-12-06 17:09:00.843125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.554 [2024-12-06 17:09:00.843189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.554 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.813 [2024-12-06 17:09:00.859129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-12-06 17:09:00.859190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.813 [2024-12-06 17:09:00.869707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.813 [2024-12-06 17:09:00.869764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.813 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.813 [2024-12-06 17:09:00.885889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.885944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:00.903289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.903413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:00.920335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.920468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:00.937629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.937776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:00.954370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.954503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:00.972173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.972333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 11329.50 IOPS, 88.51 MiB/s [2024-12-06T17:09:01.112Z] [2024-12-06 17:09:00.987158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:00.987284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:01.005074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:01.005214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:01.021352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:01.021497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:01.039135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:01.039284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:01.056167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:01.056323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:01.074541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:01.074681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:18.814 [2024-12-06 17:09:01.089701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:18.814 [2024-12-06 17:09:01.089797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:18.814 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:09:19.071 [2024-12-06 17:09:01.110882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.071 [2024-12-06 17:09:01.111021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.071 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.071 [2024-12-06 17:09:01.127315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.071 [2024-12-06 17:09:01.127381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.071 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.143098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.143166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.159342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.159409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.174849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.174913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.192930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.193002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.207404] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.207466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.223970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.224041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.239814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.239868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.255948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.256010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.273098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.273145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.289031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.289077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.306785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 
17:09:01.306832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.322404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.322452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.340413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.340458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.355396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.355433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.072 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.072 [2024-12-06 17:09:01.367200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.072 [2024-12-06 17:09:01.367237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.384510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.384549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.400062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.400100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.410023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.410059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.426019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.426057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.442643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.442682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.458226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.458275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.470552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.470589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.488184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.488221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.503233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.503282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.513249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.513296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.528603] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.528642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.330 [2024-12-06 17:09:01.544071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.330 [2024-12-06 17:09:01.544109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.330 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.331 [2024-12-06 17:09:01.554432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.331 [2024-12-06 17:09:01.554468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.331 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.331 [2024-12-06 17:09:01.568872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.331 [2024-12-06 17:09:01.568908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.331 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.331 [2024-12-06 17:09:01.583964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.331 [2024-12-06 17:09:01.584001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.331 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.331 [2024-12-06 17:09:01.599585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.331 [2024-12-06 17:09:01.599622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.331 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.331 [2024-12-06 17:09:01.609592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.331 [2024-12-06 17:09:01.609629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.331 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.331 [2024-12-06 17:09:01.624561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.331 [2024-12-06 17:09:01.624598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.331 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.634786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.634824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.649331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.649371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.666074] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.666112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.683480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.683515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.698833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.698869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.708747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.708782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.723644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.723680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.735993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 17:09:01.736030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589 [2024-12-06 17:09:01.753314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:19.589 [2024-12-06 
17:09:01.753350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:19.589 2024/12/06 17:09:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:19.589
[... the same three-message sequence -- subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use; nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace; error on JSON-RPC call, method: nvmf_subsystem_add_ns, err: Code=-32602 Msg=Invalid parameters -- repeats with identical params every 10-25 ms from 17:09:01.768 through 17:09:03.74; only the interleaved I/O throughput samples differ: ...]
11329.00 IOPS, 88.51 MiB/s [2024-12-06T17:09:02.146Z]
11402.00 IOPS, 89.08 MiB/s [2024-12-06T17:09:03.182Z]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.404 [2024-12-06 17:09:03.647549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.404 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.404 [2024-12-06 17:09:03.659757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.404 [2024-12-06 17:09:03.659795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.404 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.404 [2024-12-06 17:09:03.678164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.404 [2024-12-06 17:09:03.678210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.404 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.404 [2024-12-06 17:09:03.693312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.404 [2024-12-06 17:09:03.693353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.404 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.663 [2024-12-06 17:09:03.708852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-06 17:09:03.708893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.663 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.663 [2024-12-06 17:09:03.723378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.663 [2024-12-06 17:09:03.723418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.739257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 
17:09:03.739311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.756521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.756565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.772044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.772112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.788032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.788098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.806017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.806056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.821515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.821553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.831840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.831878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.846758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.846795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.856398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.856434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.872555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.872591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.889226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.889275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.906616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.906652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.922291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.922340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.938667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.938704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.664 [2024-12-06 17:09:03.954691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.664 [2024-12-06 17:09:03.954747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.664 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.923 [2024-12-06 17:09:03.971718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.923 [2024-12-06 17:09:03.971759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.923 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.923 11303.40 IOPS, 88.31 MiB/s [2024-12-06T17:09:04.221Z] [2024-12-06 17:09:03.987254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.923 [2024-12-06 17:09:03.987301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.923 2024/12/06 17:09:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:21.923 00:09:21.923 Latency(us) 00:09:21.923 [2024-12-06T17:09:04.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.923 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:21.923 Nvme1n1 : 5.01 11306.43 88.33 0.00 0.00 11307.14 4200.26 20852.36 00:09:21.923 [2024-12-06T17:09:04.221Z] =================================================================================================================== 00:09:21.923 [2024-12-06T17:09:04.221Z] Total : 11306.43 88.33 0.00 0.00 11307.14 4200.26 20852.36 00:09:21.923 [2024-12-06 17:09:03.997796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:21.923 [2024-12-06 17:09:03.997830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:21.923 2024/12/06 17:09:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
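The -32602 storm above is the intended negative path of this test: while the bdevperf job runs, zcopy.sh keeps re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1, and every attempt must be rejected because that namespace ID is already attached. A minimal sketch of the same check against a live target, assuming the stock scripts/rpc.py client on the default RPC socket and malloc0 already attached as NSID 1:

  # a second attach of an in-use NSID must fail with JSON-RPC error -32602 (Invalid parameters)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
      && echo 'unexpected success' || echo 'rejected as expected'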
[the add_ns error sequence continues at roughly 12 ms intervals, timestamps 17:09:03.997 through 17:09:04.129, as the test winds down; only the timestamps differ]
00:09:21.924 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68961) - No such process 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68961 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.924 delay0 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.924 17:09:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:09:22.182 [2024-12-06 17:09:04.341851] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:30.306 Initializing NVMe Controllers 00:09:30.306 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:09:30.306 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:30.306 Initialization complete. Launching workers. 00:09:30.306 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 264, failed: 17973 00:09:30.306 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18148, failed to submit 89 00:09:30.306 success 18034, unsuccessful 114, failed 0 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.306 rmmod nvme_tcp 00:09:30.306 rmmod nvme_fabrics 00:09:30.306 rmmod nvme_keyring 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68806 ']' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68806 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 68806 ']' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68806 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68806 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:30.306 killing process with pid 68806 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68806' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68806 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68806 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.306 
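For context on the phase being torn down here: zcopy.sh@53-56 wrapped malloc0 in a delay bdev with one-second average and p99 latencies, exported it as NSID 1, and ran the abort example against it so that nearly all queued I/O could be aborted in flight (abort submitted 18148, success 18034 above). A sketch of that construction, assuming a running target and the stock scripts/rpc.py client:

  # all four latency arguments are in microseconds: -r/-w average read/write, -t/-n their p99 values
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1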
17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:09:30.306 00:09:30.306 real 0m24.963s 00:09:30.306 user 0m40.176s 00:09:30.306 sys 0m7.036s 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.306 ************************************ 00:09:30.306 END TEST nvmf_zcopy 00:09:30.306 ************************************ 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.306 ************************************ 00:09:30.306 START TEST nvmf_nmic 00:09:30.306 ************************************ 00:09:30.306 17:09:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:30.306 * Looking for test storage... 
00:09:30.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.306 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.307 --rc genhtml_branch_coverage=1 00:09:30.307 --rc genhtml_function_coverage=1 00:09:30.307 --rc genhtml_legend=1 00:09:30.307 --rc geninfo_all_blocks=1 00:09:30.307 --rc geninfo_unexecuted_blocks=1 00:09:30.307 00:09:30.307 ' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.307 --rc genhtml_branch_coverage=1 00:09:30.307 --rc genhtml_function_coverage=1 00:09:30.307 --rc genhtml_legend=1 00:09:30.307 --rc geninfo_all_blocks=1 00:09:30.307 --rc geninfo_unexecuted_blocks=1 00:09:30.307 00:09:30.307 ' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.307 --rc genhtml_branch_coverage=1 00:09:30.307 --rc genhtml_function_coverage=1 00:09:30.307 --rc genhtml_legend=1 00:09:30.307 --rc geninfo_all_blocks=1 00:09:30.307 --rc geninfo_unexecuted_blocks=1 00:09:30.307 00:09:30.307 ' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.307 --rc genhtml_branch_coverage=1 00:09:30.307 --rc genhtml_function_coverage=1 00:09:30.307 --rc genhtml_legend=1 00:09:30.307 --rc geninfo_all_blocks=1 00:09:30.307 --rc geninfo_unexecuted_blocks=1 00:09:30.307 00:09:30.307 ' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.307 17:09:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.307 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:30.307 17:09:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:30.307 Cannot 
find device "nvmf_init_br" 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:30.307 Cannot find device "nvmf_init_br2" 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:09:30.307 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:30.307 Cannot find device "nvmf_tgt_br" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:30.308 Cannot find device "nvmf_tgt_br2" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:30.308 Cannot find device "nvmf_init_br" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:30.308 Cannot find device "nvmf_init_br2" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:30.308 Cannot find device "nvmf_tgt_br" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:30.308 Cannot find device "nvmf_tgt_br2" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:30.308 Cannot find device "nvmf_br" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:30.308 Cannot find device "nvmf_init_if" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:30.308 Cannot find device "nvmf_init_if2" 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:30.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:30.308 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:30.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:30.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:09:30.308 00:09:30.308 --- 10.0.0.3 ping statistics --- 00:09:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.308 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:30.308 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:30.308 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:09:30.308 00:09:30.308 --- 10.0.0.4 ping statistics --- 00:09:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.308 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:30.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:30.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:09:30.308 00:09:30.308 --- 10.0.0.1 ping statistics --- 00:09:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.308 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:30.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:30.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:09:30.308 00:09:30.308 --- 10.0.0.2 ping statistics --- 00:09:30.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.308 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.308 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=69348 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 69348 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 69348 ']' 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.567 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.567 [2024-12-06 17:09:12.662591] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
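For reference, the topology the harness assembles in the trace above — initiator-side veth ends in the root namespace, target-side veth ends moved into nvmf_tgt_ns_spdk, and all bridge ends enslaved to nvmf_br — can be rebuilt standalone with plain iproute2 and iptables. This is a minimal sketch for one initiator/target pair only, with names and addresses taken from the log; the real nvmf/common.sh also sets up the second pair and handles teardown:

  # Target side lives in its own network namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # A bridge in the root namespace ties the two veth halves together.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Open the NVMe/TCP port; the SPDK_NVMF comment is how teardown finds the rule later.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:sketch'
  ping -c 1 10.0.0.3   # root namespace -> target namespace, as verified in the log

The four pings in the log check every address in both directions, so a broken link surfaces here, before any NVMe-oF traffic is attempted.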
00:09:30.567 [2024-12-06 17:09:12.662722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.567 [2024-12-06 17:09:12.815255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.567 [2024-12-06 17:09:12.857804] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.567 [2024-12-06 17:09:12.857869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.567 [2024-12-06 17:09:12.857883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.567 [2024-12-06 17:09:12.857893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.567 [2024-12-06 17:09:12.857903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.567 [2024-12-06 17:09:12.858807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.567 [2024-12-06 17:09:12.858929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.567 [2024-12-06 17:09:12.858990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.567 [2024-12-06 17:09:12.858993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.826 17:09:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.826 [2024-12-06 17:09:12.998844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.826 Malloc0 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.826 17:09:13 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.826 [2024-12-06 17:09:13.053674] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:30.826 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.827 test case1: single bdev can't be used in multiple subsystems 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 [2024-12-06 17:09:13.081530] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:30.827 [2024-12-06 17:09:13.081575] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:30.827 [2024-12-06 17:09:13.081589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.827 2024/12/06 17:09:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:09:30.827 request: 00:09:30.827 { 00:09:30.827 "method": "nvmf_subsystem_add_ns", 00:09:30.827 "params": { 00:09:30.827 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:30.827 "namespace": { 00:09:30.827 "bdev_name": "Malloc0", 00:09:30.827 "no_auto_visible": false, 00:09:30.827 "hide_metadata": false 00:09:30.827 } 00:09:30.827 } 00:09:30.827 } 00:09:30.827 Got JSON-RPC error response 00:09:30.827 GoRPCClient: error on JSON-RPC call 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:30.827 Adding namespace failed - expected result. 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:30.827 test case2: host connect to nvmf target in multiple paths 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:30.827 [2024-12-06 17:09:13.093628] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.827 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:31.085 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:09:31.344 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.344 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:31.344 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.344 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:31.344 17:09:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 
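The JSON-RPC error above is the expected outcome of test case1: nvmf_subsystem_add_ns makes the target claim the bdev with an exclusive_write claim, so a second subsystem cannot open the same bdev and the RPC fails with Code=-32602. Test case2 then adds a second listener on port 4421 and connects the host to the same subsystem over both ports. A rough sketch of both cases using scripts/rpc.py and nvme-cli, with NQNs, addresses, and ports as in the log (exact error text may vary by SPDK version):

  # case1: one bdev cannot back namespaces in two subsystems
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # succeeds and claims Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed

  # case2: two listeners on one subsystem give the host two paths
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
  nvme list-subsys   # one subsystem, two controllers/paths

The waitforserial loop that follows simply polls lsblk until a block device with serial SPDKISFASTANDAWESOME shows up, at which point fio can be pointed at it.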
00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:33.248 17:09:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:33.248 [global] 00:09:33.248 thread=1 00:09:33.248 invalidate=1 00:09:33.248 rw=write 00:09:33.248 time_based=1 00:09:33.248 runtime=1 00:09:33.248 ioengine=libaio 00:09:33.248 direct=1 00:09:33.248 bs=4096 00:09:33.248 iodepth=1 00:09:33.248 norandommap=0 00:09:33.248 numjobs=1 00:09:33.248 00:09:33.248 verify_dump=1 00:09:33.248 verify_backlog=512 00:09:33.248 verify_state_save=0 00:09:33.248 do_verify=1 00:09:33.248 verify=crc32c-intel 00:09:33.248 [job0] 00:09:33.248 filename=/dev/nvme0n1 00:09:33.248 Could not set queue depth (nvme0n1) 00:09:33.505 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.505 fio-3.35 00:09:33.505 Starting 1 thread 00:09:34.878 00:09:34.878 job0: (groupid=0, jobs=1): err= 0: pid=69444: Fri Dec 6 17:09:16 2024 00:09:34.878 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:34.878 slat (nsec): min=15598, max=49666, avg=20085.42, stdev=4412.29 00:09:34.878 clat (usec): min=134, max=1838, avg=155.16, stdev=33.19 00:09:34.878 lat (usec): min=151, max=1856, avg=175.25, stdev=33.69 00:09:34.878 clat percentiles (usec): 00:09:34.878 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:09:34.878 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:09:34.878 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 172], 00:09:34.878 | 99.00th=[ 186], 99.50th=[ 208], 99.90th=[ 363], 99.95th=[ 474], 00:09:34.878 | 99.99th=[ 1844] 00:09:34.878 write: IOPS=3113, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:09:34.878 slat (usec): min=21, max=131, avg=29.95, stdev= 8.16 00:09:34.878 clat (usec): min=89, max=1839, avg=113.72, stdev=34.54 00:09:34.878 lat (usec): min=118, max=1867, avg=143.67, stdev=36.16 00:09:34.878 clat percentiles (usec): 00:09:34.878 | 1.00th=[ 99], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 105], 00:09:34.878 | 30.00th=[ 108], 40.00th=[ 110], 50.00th=[ 112], 60.00th=[ 114], 00:09:34.878 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 124], 95.00th=[ 130], 00:09:34.878 | 99.00th=[ 145], 99.50th=[ 184], 99.90th=[ 367], 99.95th=[ 506], 00:09:34.878 | 99.99th=[ 1844] 00:09:34.878 bw ( KiB/s): min=12672, max=12672, per=100.00%, avg=12672.00, stdev= 0.00, samples=1 00:09:34.878 iops : min= 3168, max= 3168, avg=3168.00, stdev= 0.00, samples=1 00:09:34.878 lat (usec) : 100=1.29%, 250=98.47%, 500=0.19%, 750=0.02% 00:09:34.878 lat (msec) : 2=0.03% 00:09:34.878 cpu : usr=2.30%, sys=12.50%, ctx=6189, majf=0, minf=5 00:09:34.878 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.878 issued rwts: total=3072,3117,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.878 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.878 00:09:34.878 Run status group 0 (all jobs): 00:09:34.878 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:09:34.878 WRITE: bw=12.2MiB/s (12.8MB/s), 
12.2MiB/s-12.2MiB/s (12.8MB/s-12.8MB/s), io=12.2MiB (12.8MB), run=1001-1001msec 00:09:34.878 00:09:34.878 Disk stats (read/write): 00:09:34.878 nvme0n1: ios=2609/3063, merge=0/0, ticks=423/390, in_queue=813, util=91.37% 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.878 rmmod nvme_tcp 00:09:34.878 rmmod nvme_fabrics 00:09:34.878 rmmod nvme_keyring 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 69348 ']' 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 69348 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 69348 ']' 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 69348 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69348 00:09:34.878 killing process with pid 69348 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.878 17:09:16 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69348' 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 69348 00:09:34.878 17:09:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 69348 00:09:34.878 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:34.879 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:09:35.137 00:09:35.137 real 0m5.474s 00:09:35.137 user 0m16.805s 00:09:35.137 sys 0m1.444s 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.137 ************************************ 00:09:35.137 END TEST nvmf_nmic 00:09:35.137 ************************************ 00:09:35.137 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:35.395 17:09:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.395 17:09:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.395 17:09:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.395 17:09:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.395 ************************************ 00:09:35.395 START TEST nvmf_fio_target 00:09:35.395 ************************************ 00:09:35.395 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:35.395 * Looking for test storage... 00:09:35.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:35.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.396 --rc genhtml_branch_coverage=1 00:09:35.396 --rc genhtml_function_coverage=1 00:09:35.396 --rc genhtml_legend=1 00:09:35.396 --rc geninfo_all_blocks=1 00:09:35.396 --rc geninfo_unexecuted_blocks=1 00:09:35.396 00:09:35.396 ' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:35.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.396 --rc genhtml_branch_coverage=1 00:09:35.396 --rc genhtml_function_coverage=1 00:09:35.396 --rc genhtml_legend=1 00:09:35.396 --rc geninfo_all_blocks=1 00:09:35.396 --rc geninfo_unexecuted_blocks=1 00:09:35.396 00:09:35.396 ' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:35.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.396 --rc genhtml_branch_coverage=1 00:09:35.396 --rc genhtml_function_coverage=1 00:09:35.396 --rc genhtml_legend=1 00:09:35.396 --rc geninfo_all_blocks=1 00:09:35.396 --rc geninfo_unexecuted_blocks=1 00:09:35.396 00:09:35.396 ' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:35.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.396 --rc genhtml_branch_coverage=1 00:09:35.396 --rc genhtml_function_coverage=1 00:09:35.396 --rc genhtml_legend=1 00:09:35.396 --rc geninfo_all_blocks=1 00:09:35.396 --rc geninfo_unexecuted_blocks=1 00:09:35.396 00:09:35.396 ' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:35.396 
17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.396 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.396 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.397 17:09:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.397 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:35.677 Cannot find device "nvmf_init_br" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:35.677 Cannot find device "nvmf_init_br2" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:35.677 Cannot find device "nvmf_tgt_br" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:35.677 Cannot find device "nvmf_tgt_br2" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.677 Cannot find device "nvmf_init_br" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.677 Cannot find device "nvmf_init_br2" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.677 Cannot find device "nvmf_tgt_br" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.677 Cannot find device "nvmf_tgt_br2" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.677 Cannot find device "nvmf_br" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.677 Cannot find device "nvmf_init_if" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.677 Cannot find device "nvmf_init_if2" 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:09:35.677 
17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.677 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.677 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.935 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.935 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.935 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.935 17:09:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.935 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.935 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:09:35.935 00:09:35.935 --- 10.0.0.3 ping statistics --- 00:09:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.935 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.935 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.935 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:09:35.935 00:09:35.935 --- 10.0.0.4 ping statistics --- 00:09:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.935 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.935 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.935 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:09:35.935 00:09:35.935 --- 10.0.0.1 ping statistics --- 00:09:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.935 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.935 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.935 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:35.935 00:09:35.935 --- 10.0.0.2 ping statistics --- 00:09:35.935 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.935 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69681 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69681 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69681 ']' 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.935 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:35.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.936 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.936 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:35.936 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.936 [2024-12-06 17:09:18.211317] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
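As in the nmic run earlier, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls until the application answers on its UNIX-domain RPC socket. A rough standalone equivalent is sketched below; the harness's actual retry logic in autotest_common.sh differs in detail, and rpc_get_methods is assumed here only because it is a standard always-available SPDK RPC:

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll /var/tmp/spdk.sock until the target responds, giving up after ~10s.
  for _ in $(seq 1 100); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done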
00:09:35.936 [2024-12-06 17:09:18.211452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.193 [2024-12-06 17:09:18.363638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.193 [2024-12-06 17:09:18.402797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:36.193 [2024-12-06 17:09:18.402853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:36.193 [2024-12-06 17:09:18.402866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:36.193 [2024-12-06 17:09:18.402877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:36.193 [2024-12-06 17:09:18.402885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:36.193 [2024-12-06 17:09:18.403768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.193 [2024-12-06 17:09:18.403826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.193 [2024-12-06 17:09:18.404637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.193 [2024-12-06 17:09:18.404678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:36.452 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:36.709 [2024-12-06 17:09:18.826148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:36.709 17:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:36.968 17:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:36.968 17:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.630 17:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:37.630 17:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:37.888 17:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:37.888 17:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.146 17:09:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:38.146 17:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:38.405 17:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:38.663 17:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:38.663 17:09:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.229 17:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:39.229 17:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.486 17:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:39.486 17:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:39.744 17:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:40.003 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:40.003 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.262 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:40.262 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:40.521 17:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:41.088 [2024-12-06 17:09:23.096407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:41.088 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:41.346 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:41.604 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:41.862 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:41.862 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:41.862 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:09:41.862 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:41.862 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:41.862 17:09:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:43.761 17:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:43.761 [global] 00:09:43.761 thread=1 00:09:43.761 invalidate=1 00:09:43.761 rw=write 00:09:43.761 time_based=1 00:09:43.761 runtime=1 00:09:43.761 ioengine=libaio 00:09:43.761 direct=1 00:09:43.761 bs=4096 00:09:43.761 iodepth=1 00:09:43.761 norandommap=0 00:09:43.761 numjobs=1 00:09:43.761 00:09:43.761 verify_dump=1 00:09:43.761 verify_backlog=512 00:09:43.761 verify_state_save=0 00:09:43.761 do_verify=1 00:09:43.761 verify=crc32c-intel 00:09:43.761 [job0] 00:09:43.761 filename=/dev/nvme0n1 00:09:43.761 [job1] 00:09:43.761 filename=/dev/nvme0n2 00:09:43.761 [job2] 00:09:43.761 filename=/dev/nvme0n3 00:09:43.761 [job3] 00:09:43.761 filename=/dev/nvme0n4 00:09:43.761 Could not set queue depth (nvme0n1) 00:09:43.761 Could not set queue depth (nvme0n2) 00:09:43.761 Could not set queue depth (nvme0n3) 00:09:43.761 Could not set queue depth (nvme0n4) 00:09:44.018 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.018 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.018 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.018 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:44.018 fio-3.35 00:09:44.018 Starting 4 threads 00:09:45.391 00:09:45.391 job0: (groupid=0, jobs=1): err= 0: pid=69972: Fri Dec 6 17:09:27 2024 00:09:45.391 read: IOPS=2511, BW=9.81MiB/s (10.3MB/s)(9.82MiB/1001msec) 00:09:45.391 slat (nsec): min=14188, max=61894, avg=20613.00, stdev=6547.16 00:09:45.391 clat (usec): min=141, max=510, avg=194.62, stdev=36.73 00:09:45.391 lat (usec): min=158, max=527, avg=215.24, stdev=39.36 00:09:45.391 clat percentiles (usec): 00:09:45.391 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 165], 00:09:45.391 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 192], 00:09:45.391 | 70.00th=[ 208], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 262], 00:09:45.391 | 99.00th=[ 302], 99.50th=[ 326], 99.90th=[ 396], 99.95th=[ 510], 00:09:45.391 | 99.99th=[ 510] 00:09:45.391 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:45.391 slat 
(nsec): min=20277, max=97001, avg=27642.96, stdev=8954.86 00:09:45.391 clat (usec): min=101, max=1953, avg=147.26, stdev=44.76 00:09:45.391 lat (usec): min=124, max=1978, avg=174.90, stdev=49.17 00:09:45.391 clat percentiles (usec): 00:09:45.391 | 1.00th=[ 112], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 125], 00:09:45.391 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 139], 60.00th=[ 145], 00:09:45.391 | 70.00th=[ 155], 80.00th=[ 172], 90.00th=[ 188], 95.00th=[ 200], 00:09:45.391 | 99.00th=[ 225], 99.50th=[ 233], 99.90th=[ 289], 99.95th=[ 293], 00:09:45.391 | 99.99th=[ 1958] 00:09:45.391 bw ( KiB/s): min=10208, max=10208, per=28.63%, avg=10208.00, stdev= 0.00, samples=1 00:09:45.391 iops : min= 2552, max= 2552, avg=2552.00, stdev= 0.00, samples=1 00:09:45.391 lat (usec) : 250=96.02%, 500=3.92%, 750=0.04% 00:09:45.391 lat (msec) : 2=0.02% 00:09:45.391 cpu : usr=2.60%, sys=9.20%, ctx=5090, majf=0, minf=9 00:09:45.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.391 issued rwts: total=2514,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.391 job1: (groupid=0, jobs=1): err= 0: pid=69973: Fri Dec 6 17:09:27 2024 00:09:45.391 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:45.391 slat (nsec): min=13457, max=48589, avg=17529.97, stdev=4507.13 00:09:45.391 clat (usec): min=136, max=5934, avg=185.57, stdev=204.08 00:09:45.391 lat (usec): min=151, max=5949, avg=203.10, stdev=204.41 00:09:45.391 clat percentiles (usec): 00:09:45.391 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:09:45.391 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 167], 60.00th=[ 172], 00:09:45.391 | 70.00th=[ 180], 80.00th=[ 196], 90.00th=[ 221], 95.00th=[ 233], 00:09:45.391 | 99.00th=[ 260], 99.50th=[ 420], 99.90th=[ 4555], 99.95th=[ 4621], 00:09:45.391 | 99.99th=[ 5932] 00:09:45.391 write: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:09:45.391 slat (nsec): min=16517, max=97346, avg=26175.80, stdev=8577.96 00:09:45.391 clat (usec): min=101, max=328, avg=136.35, stdev=21.05 00:09:45.391 lat (usec): min=121, max=411, avg=162.52, stdev=22.61 00:09:45.391 clat percentiles (usec): 00:09:45.391 | 1.00th=[ 106], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 119], 00:09:45.391 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 133], 60.00th=[ 137], 00:09:45.391 | 70.00th=[ 145], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 174], 00:09:45.391 | 99.00th=[ 192], 99.50th=[ 198], 99.90th=[ 314], 99.95th=[ 326], 00:09:45.391 | 99.99th=[ 330] 00:09:45.391 bw ( KiB/s): min=12288, max=12288, per=34.47%, avg=12288.00, stdev= 0.00, samples=1 00:09:45.392 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:45.392 lat (usec) : 250=99.21%, 500=0.59%, 750=0.06% 00:09:45.392 lat (msec) : 2=0.04%, 4=0.04%, 10=0.07% 00:09:45.392 cpu : usr=2.00%, sys=9.50%, ctx=5449, majf=0, minf=3 00:09:45.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.392 issued rwts: total=2560,2889,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.392 job2: (groupid=0, jobs=1): 
err= 0: pid=69974: Fri Dec 6 17:09:27 2024 00:09:45.392 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:45.392 slat (nsec): min=15599, max=74974, avg=28060.38, stdev=5896.51 00:09:45.392 clat (usec): min=182, max=762, avg=304.64, stdev=42.08 00:09:45.392 lat (usec): min=212, max=789, avg=332.70, stdev=42.17 00:09:45.392 clat percentiles (usec): 00:09:45.392 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:09:45.392 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:09:45.392 | 70.00th=[ 310], 80.00th=[ 318], 90.00th=[ 338], 95.00th=[ 408], 00:09:45.392 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 578], 99.95th=[ 766], 00:09:45.392 | 99.99th=[ 766] 00:09:45.392 write: IOPS=1768, BW=7073KiB/s (7243kB/s)(7080KiB/1001msec); 0 zone resets 00:09:45.392 slat (usec): min=23, max=131, avg=37.29, stdev= 9.64 00:09:45.392 clat (usec): min=122, max=692, avg=233.42, stdev=45.52 00:09:45.392 lat (usec): min=149, max=729, avg=270.71, stdev=45.72 00:09:45.392 clat percentiles (usec): 00:09:45.392 | 1.00th=[ 137], 5.00th=[ 178], 10.00th=[ 194], 20.00th=[ 204], 00:09:45.392 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:09:45.392 | 70.00th=[ 239], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 322], 00:09:45.392 | 99.00th=[ 355], 99.50th=[ 371], 99.90th=[ 635], 99.95th=[ 693], 00:09:45.392 | 99.99th=[ 693] 00:09:45.392 bw ( KiB/s): min= 8192, max= 8192, per=22.98%, avg=8192.00, stdev= 0.00, samples=1 00:09:45.392 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:45.392 lat (usec) : 250=40.99%, 500=58.86%, 750=0.12%, 1000=0.03% 00:09:45.392 cpu : usr=3.40%, sys=7.00%, ctx=3307, majf=0, minf=5 00:09:45.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.392 issued rwts: total=1536,1770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.392 job3: (groupid=0, jobs=1): err= 0: pid=69975: Fri Dec 6 17:09:27 2024 00:09:45.392 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:45.392 slat (nsec): min=14850, max=71771, avg=27108.97, stdev=5366.52 00:09:45.392 clat (usec): min=172, max=664, avg=312.49, stdev=50.85 00:09:45.392 lat (usec): min=213, max=691, avg=339.60, stdev=51.56 00:09:45.392 clat percentiles (usec): 00:09:45.392 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 285], 00:09:45.392 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:09:45.392 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 379], 95.00th=[ 433], 00:09:45.392 | 99.00th=[ 519], 99.50th=[ 586], 99.90th=[ 627], 99.95th=[ 668], 00:09:45.392 | 99.99th=[ 668] 00:09:45.392 write: IOPS=1701, BW=6805KiB/s (6969kB/s)(6812KiB/1001msec); 0 zone resets 00:09:45.392 slat (usec): min=22, max=101, avg=37.38, stdev= 8.93 00:09:45.392 clat (usec): min=123, max=2657, avg=237.90, stdev=74.78 00:09:45.392 lat (usec): min=176, max=2704, avg=275.28, stdev=75.39 00:09:45.392 clat percentiles (usec): 00:09:45.392 | 1.00th=[ 178], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 206], 00:09:45.392 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:09:45.392 | 70.00th=[ 241], 80.00th=[ 265], 90.00th=[ 297], 95.00th=[ 326], 00:09:45.392 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 1123], 99.95th=[ 2671], 00:09:45.392 | 99.99th=[ 2671] 00:09:45.392 bw ( KiB/s): min= 
8192, max= 8192, per=22.98%, avg=8192.00, stdev= 0.00, samples=1 00:09:45.392 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:45.392 lat (usec) : 250=39.39%, 500=59.93%, 750=0.62% 00:09:45.392 lat (msec) : 2=0.03%, 4=0.03% 00:09:45.392 cpu : usr=2.00%, sys=8.10%, ctx=3240, majf=0, minf=21 00:09:45.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:45.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.392 issued rwts: total=1536,1703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:45.392 00:09:45.392 Run status group 0 (all jobs): 00:09:45.392 READ: bw=31.8MiB/s (33.3MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=31.8MiB (33.4MB), run=1001-1001msec 00:09:45.392 WRITE: bw=34.8MiB/s (36.5MB/s), 6805KiB/s-11.3MiB/s (6969kB/s-11.8MB/s), io=34.9MiB (36.5MB), run=1001-1001msec 00:09:45.392 00:09:45.392 Disk stats (read/write): 00:09:45.392 nvme0n1: ios=2098/2266, merge=0/0, ticks=454/361, in_queue=815, util=89.18% 00:09:45.392 nvme0n2: ios=2246/2560, merge=0/0, ticks=442/373, in_queue=815, util=89.77% 00:09:45.392 nvme0n3: ios=1329/1536, merge=0/0, ticks=427/358, in_queue=785, util=89.07% 00:09:45.392 nvme0n4: ios=1290/1536, merge=0/0, ticks=417/384, in_queue=801, util=89.72% 00:09:45.392 17:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:45.392 [global] 00:09:45.392 thread=1 00:09:45.392 invalidate=1 00:09:45.392 rw=randwrite 00:09:45.392 time_based=1 00:09:45.392 runtime=1 00:09:45.392 ioengine=libaio 00:09:45.392 direct=1 00:09:45.392 bs=4096 00:09:45.392 iodepth=1 00:09:45.392 norandommap=0 00:09:45.392 numjobs=1 00:09:45.392 00:09:45.392 verify_dump=1 00:09:45.392 verify_backlog=512 00:09:45.392 verify_state_save=0 00:09:45.392 do_verify=1 00:09:45.392 verify=crc32c-intel 00:09:45.392 [job0] 00:09:45.392 filename=/dev/nvme0n1 00:09:45.392 [job1] 00:09:45.392 filename=/dev/nvme0n2 00:09:45.392 [job2] 00:09:45.392 filename=/dev/nvme0n3 00:09:45.392 [job3] 00:09:45.392 filename=/dev/nvme0n4 00:09:45.392 Could not set queue depth (nvme0n1) 00:09:45.392 Could not set queue depth (nvme0n2) 00:09:45.392 Could not set queue depth (nvme0n3) 00:09:45.392 Could not set queue depth (nvme0n4) 00:09:45.392 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.392 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.392 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.392 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:45.392 fio-3.35 00:09:45.392 Starting 4 threads 00:09:46.768 00:09:46.768 job0: (groupid=0, jobs=1): err= 0: pid=70028: Fri Dec 6 17:09:28 2024 00:09:46.768 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:46.768 slat (nsec): min=9176, max=67540, avg=19738.85, stdev=6253.69 00:09:46.768 clat (usec): min=214, max=7347, avg=292.56, stdev=288.51 00:09:46.768 lat (usec): min=238, max=7380, avg=312.30, stdev=289.06 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 231], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 255], 00:09:46.768 | 30.00th=[ 260], 
40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:09:46.768 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 351], 00:09:46.768 | 99.00th=[ 486], 99.50th=[ 873], 99.90th=[ 7111], 99.95th=[ 7373], 00:09:46.768 | 99.99th=[ 7373] 00:09:46.768 write: IOPS=2022, BW=8092KiB/s (8286kB/s)(8100KiB/1001msec); 0 zone resets 00:09:46.768 slat (nsec): min=11918, max=82128, avg=28763.76, stdev=9299.72 00:09:46.768 clat (usec): min=112, max=452, avg=224.30, stdev=40.15 00:09:46.768 lat (usec): min=130, max=468, avg=253.07, stdev=42.86 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 129], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:09:46.768 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 225], 00:09:46.768 | 70.00th=[ 241], 80.00th=[ 260], 90.00th=[ 281], 95.00th=[ 293], 00:09:46.768 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 408], 99.95th=[ 445], 00:09:46.768 | 99.99th=[ 453] 00:09:46.768 bw ( KiB/s): min= 8192, max= 8192, per=21.12%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.768 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.768 lat (usec) : 250=48.13%, 500=51.47%, 750=0.17%, 1000=0.03% 00:09:46.768 lat (msec) : 2=0.03%, 4=0.11%, 10=0.06% 00:09:46.768 cpu : usr=1.70%, sys=6.80%, ctx=3562, majf=0, minf=17 00:09:46.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 issued rwts: total=1536,2025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.768 job1: (groupid=0, jobs=1): err= 0: pid=70029: Fri Dec 6 17:09:28 2024 00:09:46.768 read: IOPS=1625, BW=6501KiB/s (6658kB/s)(6508KiB/1001msec) 00:09:46.768 slat (nsec): min=11077, max=65453, avg=19535.04, stdev=7680.65 00:09:46.768 clat (usec): min=148, max=887, avg=271.95, stdev=52.34 00:09:46.768 lat (usec): min=170, max=923, avg=291.49, stdev=51.39 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 239], 20.00th=[ 253], 00:09:46.768 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:09:46.768 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 322], 95.00th=[ 363], 00:09:46.768 | 99.00th=[ 449], 99.50th=[ 494], 99.90th=[ 693], 99.95th=[ 889], 00:09:46.768 | 99.99th=[ 889] 00:09:46.768 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:46.768 slat (usec): min=11, max=111, avg=28.33, stdev= 8.83 00:09:46.768 clat (usec): min=112, max=475, avg=224.27, stdev=39.64 00:09:46.768 lat (usec): min=135, max=503, avg=252.61, stdev=41.83 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 128], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 196], 00:09:46.768 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 225], 00:09:46.768 | 70.00th=[ 241], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 297], 00:09:46.768 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 437], 99.95th=[ 457], 00:09:46.768 | 99.99th=[ 478] 00:09:46.768 bw ( KiB/s): min= 8192, max= 8192, per=21.12%, avg=8192.00, stdev= 0.00, samples=1 00:09:46.768 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:46.768 lat (usec) : 250=49.82%, 500=50.01%, 750=0.14%, 1000=0.03% 00:09:46.768 cpu : usr=1.10%, sys=7.80%, ctx=3675, majf=0, minf=7 00:09:46.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.768 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 issued rwts: total=1627,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.768 job2: (groupid=0, jobs=1): err= 0: pid=70030: Fri Dec 6 17:09:28 2024 00:09:46.768 read: IOPS=2367, BW=9471KiB/s (9698kB/s)(9480KiB/1001msec) 00:09:46.768 slat (nsec): min=14383, max=55324, avg=19019.22, stdev=5460.59 00:09:46.768 clat (usec): min=157, max=739, avg=200.20, stdev=31.67 00:09:46.768 lat (usec): min=174, max=762, avg=219.22, stdev=32.45 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:09:46.768 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:09:46.768 | 70.00th=[ 204], 80.00th=[ 221], 90.00th=[ 245], 95.00th=[ 262], 00:09:46.768 | 99.00th=[ 285], 99.50th=[ 297], 99.90th=[ 461], 99.95th=[ 578], 00:09:46.768 | 99.99th=[ 742] 00:09:46.768 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:46.768 slat (nsec): min=20575, max=80414, avg=27807.48, stdev=7726.29 00:09:46.768 clat (usec): min=115, max=2616, avg=155.73, stdev=53.61 00:09:46.768 lat (usec): min=138, max=2638, avg=183.54, stdev=54.26 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 124], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:09:46.768 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:09:46.768 | 70.00th=[ 161], 80.00th=[ 172], 90.00th=[ 188], 95.00th=[ 200], 00:09:46.768 | 99.00th=[ 221], 99.50th=[ 227], 99.90th=[ 277], 99.95th=[ 347], 00:09:46.768 | 99.99th=[ 2606] 00:09:46.768 bw ( KiB/s): min=11960, max=11960, per=30.84%, avg=11960.00, stdev= 0.00, samples=1 00:09:46.768 iops : min= 2990, max= 2990, avg=2990.00, stdev= 0.00, samples=1 00:09:46.768 lat (usec) : 250=95.84%, 500=4.10%, 750=0.04% 00:09:46.768 lat (msec) : 4=0.02% 00:09:46.768 cpu : usr=2.00%, sys=9.20%, ctx=4934, majf=0, minf=10 00:09:46.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 issued rwts: total=2370,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.768 job3: (groupid=0, jobs=1): err= 0: pid=70031: Fri Dec 6 17:09:28 2024 00:09:46.768 read: IOPS=2566, BW=10.0MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:46.768 slat (nsec): min=13838, max=44966, avg=17436.43, stdev=3234.30 00:09:46.768 clat (usec): min=147, max=284, avg=173.85, stdev=11.76 00:09:46.768 lat (usec): min=163, max=302, avg=191.29, stdev=12.07 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:09:46.768 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:09:46.768 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:09:46.768 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 237], 99.95th=[ 258], 00:09:46.768 | 99.99th=[ 285] 00:09:46.768 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:46.768 slat (nsec): min=19791, max=80890, avg=24845.40, stdev=5307.43 00:09:46.768 clat (usec): min=109, max=649, avg=136.98, stdev=18.98 00:09:46.768 lat (usec): min=132, max=690, avg=161.83, stdev=20.17 00:09:46.768 clat percentiles (usec): 00:09:46.768 | 
1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:09:46.768 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:09:46.768 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 163], 00:09:46.768 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 273], 99.95th=[ 594], 00:09:46.768 | 99.99th=[ 652] 00:09:46.768 bw ( KiB/s): min=12288, max=12288, per=31.69%, avg=12288.00, stdev= 0.00, samples=1 00:09:46.768 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:46.768 lat (usec) : 250=99.89%, 500=0.07%, 750=0.04% 00:09:46.768 cpu : usr=2.60%, sys=9.00%, ctx=5645, majf=0, minf=15 00:09:46.768 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:46.768 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.768 issued rwts: total=2569,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.768 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:46.768 00:09:46.768 Run status group 0 (all jobs): 00:09:46.768 READ: bw=31.6MiB/s (33.2MB/s), 6138KiB/s-10.0MiB/s (6285kB/s-10.5MB/s), io=31.6MiB (33.2MB), run=1001-1001msec 00:09:46.768 WRITE: bw=37.9MiB/s (39.7MB/s), 8092KiB/s-12.0MiB/s (8286kB/s-12.6MB/s), io=37.9MiB (39.8MB), run=1001-1001msec 00:09:46.768 00:09:46.768 Disk stats (read/write): 00:09:46.768 nvme0n1: ios=1487/1536, merge=0/0, ticks=450/357, in_queue=807, util=87.37% 00:09:46.768 nvme0n2: ios=1584/1547, merge=0/0, ticks=436/366, in_queue=802, util=87.97% 00:09:46.768 nvme0n3: ios=2048/2233, merge=0/0, ticks=416/372, in_queue=788, util=88.77% 00:09:46.768 nvme0n4: ios=2229/2560, merge=0/0, ticks=404/376, in_queue=780, util=89.71% 00:09:46.768 17:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:46.768 [global] 00:09:46.768 thread=1 00:09:46.768 invalidate=1 00:09:46.768 rw=write 00:09:46.768 time_based=1 00:09:46.768 runtime=1 00:09:46.768 ioengine=libaio 00:09:46.768 direct=1 00:09:46.768 bs=4096 00:09:46.768 iodepth=128 00:09:46.768 norandommap=0 00:09:46.768 numjobs=1 00:09:46.768 00:09:46.768 verify_dump=1 00:09:46.768 verify_backlog=512 00:09:46.768 verify_state_save=0 00:09:46.768 do_verify=1 00:09:46.768 verify=crc32c-intel 00:09:46.768 [job0] 00:09:46.768 filename=/dev/nvme0n1 00:09:46.768 [job1] 00:09:46.768 filename=/dev/nvme0n2 00:09:46.768 [job2] 00:09:46.768 filename=/dev/nvme0n3 00:09:46.768 [job3] 00:09:46.768 filename=/dev/nvme0n4 00:09:46.768 Could not set queue depth (nvme0n1) 00:09:46.768 Could not set queue depth (nvme0n2) 00:09:46.768 Could not set queue depth (nvme0n3) 00:09:46.768 Could not set queue depth (nvme0n4) 00:09:46.768 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.768 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.768 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.768 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:46.768 fio-3.35 00:09:46.768 Starting 4 threads 00:09:48.143 00:09:48.143 job0: (groupid=0, jobs=1): err= 0: pid=70092: Fri Dec 6 17:09:30 2024 00:09:48.143 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:48.143 slat (usec): min=5, max=5060, avg=104.18, 
stdev=492.47 00:09:48.143 clat (usec): min=8114, max=20916, avg=13839.64, stdev=1682.47 00:09:48.143 lat (usec): min=8141, max=20948, avg=13943.82, stdev=1734.90 00:09:48.143 clat percentiles (usec): 00:09:48.143 | 1.00th=[ 9896], 5.00th=[11469], 10.00th=[11994], 20.00th=[12387], 00:09:48.143 | 30.00th=[12780], 40.00th=[13304], 50.00th=[13829], 60.00th=[14353], 00:09:48.143 | 70.00th=[14615], 80.00th=[14877], 90.00th=[16057], 95.00th=[16712], 00:09:48.143 | 99.00th=[18482], 99.50th=[19268], 99.90th=[20055], 99.95th=[20055], 00:09:48.143 | 99.99th=[20841] 00:09:48.143 write: IOPS=4912, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1001msec); 0 zone resets 00:09:48.143 slat (usec): min=12, max=5645, avg=97.64, stdev=503.78 00:09:48.143 clat (usec): min=617, max=20782, avg=12779.61, stdev=1943.11 00:09:48.143 lat (usec): min=652, max=20827, avg=12877.25, stdev=1999.74 00:09:48.143 clat percentiles (usec): 00:09:48.143 | 1.00th=[ 5997], 5.00th=[10552], 10.00th=[11338], 20.00th=[11600], 00:09:48.143 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12911], 00:09:48.143 | 70.00th=[13304], 80.00th=[13698], 90.00th=[15401], 95.00th=[16188], 00:09:48.143 | 99.00th=[19006], 99.50th=[19268], 99.90th=[20841], 99.95th=[20841], 00:09:48.143 | 99.99th=[20841] 00:09:48.143 bw ( KiB/s): min=18088, max=20272, per=25.76%, avg=19180.00, stdev=1544.32, samples=2 00:09:48.143 iops : min= 4522, max= 5068, avg=4795.00, stdev=386.08, samples=2 00:09:48.143 lat (usec) : 750=0.02% 00:09:48.143 lat (msec) : 10=2.19%, 20=97.67%, 50=0.12% 00:09:48.143 cpu : usr=5.00%, sys=13.70%, ctx=448, majf=0, minf=1 00:09:48.143 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:48.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.143 issued rwts: total=4608,4917,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.144 job1: (groupid=0, jobs=1): err= 0: pid=70093: Fri Dec 6 17:09:30 2024 00:09:48.144 read: IOPS=4602, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:48.144 slat (usec): min=4, max=3431, avg=105.61, stdev=486.47 00:09:48.144 clat (usec): min=624, max=16320, avg=13950.80, stdev=1331.31 00:09:48.144 lat (usec): min=3701, max=17802, avg=14056.41, stdev=1257.60 00:09:48.144 clat percentiles (usec): 00:09:48.144 | 1.00th=[ 7570], 5.00th=[11731], 10.00th=[12780], 20.00th=[13698], 00:09:48.144 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14353], 00:09:48.144 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14877], 95.00th=[15139], 00:09:48.144 | 99.00th=[15795], 99.50th=[15926], 99.90th=[16319], 99.95th=[16319], 00:09:48.144 | 99.99th=[16319] 00:09:48.144 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:09:48.144 slat (usec): min=10, max=3941, avg=102.90, stdev=444.74 00:09:48.144 clat (usec): min=10325, max=16400, avg=13478.95, stdev=1245.17 00:09:48.144 lat (usec): min=10770, max=16558, avg=13581.85, stdev=1220.07 00:09:48.144 clat percentiles (usec): 00:09:48.144 | 1.00th=[10945], 5.00th=[11469], 10.00th=[11731], 20.00th=[12125], 00:09:48.144 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13698], 60.00th=[14091], 00:09:48.144 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15008], 95.00th=[15270], 00:09:48.144 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16319], 99.95th=[16450], 00:09:48.144 | 99.99th=[16450] 00:09:48.144 bw ( KiB/s): min=17144, max=19759, per=24.78%, 
avg=18451.50, stdev=1849.08, samples=2 00:09:48.144 iops : min= 4286, max= 4939, avg=4612.50, stdev=461.74, samples=2 00:09:48.144 lat (usec) : 750=0.01% 00:09:48.144 lat (msec) : 4=0.14%, 10=0.67%, 20=99.18% 00:09:48.144 cpu : usr=4.40%, sys=13.80%, ctx=430, majf=0, minf=4 00:09:48.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:48.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.144 issued rwts: total=4607,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.144 job2: (groupid=0, jobs=1): err= 0: pid=70094: Fri Dec 6 17:09:30 2024 00:09:48.144 read: IOPS=4215, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1002msec) 00:09:48.144 slat (usec): min=5, max=3731, avg=113.39, stdev=482.73 00:09:48.144 clat (usec): min=524, max=18633, avg=14390.28, stdev=1534.23 00:09:48.144 lat (usec): min=3811, max=19735, avg=14503.67, stdev=1501.34 00:09:48.144 clat percentiles (usec): 00:09:48.144 | 1.00th=[ 7635], 5.00th=[12387], 10.00th=[13042], 20.00th=[13698], 00:09:48.144 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:09:48.144 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15795], 95.00th=[16188], 00:09:48.144 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18744], 99.95th=[18744], 00:09:48.144 | 99.99th=[18744] 00:09:48.144 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:09:48.144 slat (usec): min=14, max=3548, avg=106.41, stdev=438.79 00:09:48.144 clat (usec): min=11163, max=19425, avg=14240.62, stdev=1463.54 00:09:48.144 lat (usec): min=11184, max=19447, avg=14347.02, stdev=1448.64 00:09:48.144 clat percentiles (usec): 00:09:48.144 | 1.00th=[11731], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:09:48.144 | 30.00th=[13173], 40.00th=[14222], 50.00th=[14484], 60.00th=[14877], 00:09:48.144 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15664], 95.00th=[16450], 00:09:48.144 | 99.00th=[17957], 99.50th=[19268], 99.90th=[19268], 99.95th=[19530], 00:09:48.144 | 99.99th=[19530] 00:09:48.144 bw ( KiB/s): min=17875, max=19024, per=24.78%, avg=18449.50, stdev=812.47, samples=2 00:09:48.144 iops : min= 4468, max= 4756, avg=4612.00, stdev=203.65, samples=2 00:09:48.144 lat (usec) : 750=0.01% 00:09:48.144 lat (msec) : 4=0.15%, 10=0.58%, 20=99.26% 00:09:48.144 cpu : usr=3.40%, sys=12.39%, ctx=546, majf=0, minf=3 00:09:48.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:48.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.144 issued rwts: total=4224,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.144 job3: (groupid=0, jobs=1): err= 0: pid=70095: Fri Dec 6 17:09:30 2024 00:09:48.144 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:09:48.144 slat (usec): min=4, max=5118, avg=113.87, stdev=557.09 00:09:48.144 clat (usec): min=11006, max=20784, avg=15140.62, stdev=1296.73 00:09:48.144 lat (usec): min=11043, max=20815, avg=15254.49, stdev=1355.30 00:09:48.144 clat percentiles (usec): 00:09:48.144 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13566], 20.00th=[14353], 00:09:48.144 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:09:48.144 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16450], 
95.00th=[17171], 00:09:48.144 | 99.00th=[18744], 99.50th=[19268], 99.90th=[20317], 99.95th=[20317], 00:09:48.144 | 99.99th=[20841] 00:09:48.144 write: IOPS=4522, BW=17.7MiB/s (18.5MB/s)(17.7MiB/1003msec); 0 zone resets 00:09:48.144 slat (usec): min=11, max=4529, avg=110.75, stdev=558.08 00:09:48.144 clat (usec): min=307, max=19231, avg=14229.90, stdev=1601.52 00:09:48.144 lat (usec): min=4325, max=19264, avg=14340.65, stdev=1616.51 00:09:48.144 clat percentiles (usec): 00:09:48.144 | 1.00th=[ 9110], 5.00th=[11207], 10.00th=[12387], 20.00th=[13829], 00:09:48.144 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:09:48.144 | 70.00th=[14877], 80.00th=[15139], 90.00th=[15533], 95.00th=[16057], 00:09:48.144 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:09:48.144 | 99.99th=[19268] 00:09:48.144 bw ( KiB/s): min=17160, max=18104, per=23.68%, avg=17632.00, stdev=667.51, samples=2 00:09:48.144 iops : min= 4290, max= 4526, avg=4408.00, stdev=166.88, samples=2 00:09:48.144 lat (usec) : 500=0.01% 00:09:48.144 lat (msec) : 10=0.90%, 20=98.99%, 50=0.09% 00:09:48.144 cpu : usr=4.99%, sys=10.68%, ctx=309, majf=0, minf=3 00:09:48.144 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:48.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.144 issued rwts: total=4096,4536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.144 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.144 00:09:48.144 Run status group 0 (all jobs): 00:09:48.144 READ: bw=68.3MiB/s (71.6MB/s), 16.0MiB/s-18.0MiB/s (16.7MB/s-18.9MB/s), io=68.5MiB (71.8MB), run=1001-1003msec 00:09:48.144 WRITE: bw=72.7MiB/s (76.2MB/s), 17.7MiB/s-19.2MiB/s (18.5MB/s-20.1MB/s), io=72.9MiB (76.5MB), run=1001-1003msec 00:09:48.144 00:09:48.144 Disk stats (read/write): 00:09:48.144 nvme0n1: ios=4075/4096, merge=0/0, ticks=26916/22873, in_queue=49789, util=88.97% 00:09:48.144 nvme0n2: ios=3933/4096, merge=0/0, ticks=12632/11710, in_queue=24342, util=89.39% 00:09:48.144 nvme0n3: ios=3584/4077, merge=0/0, ticks=12617/12745, in_queue=25362, util=89.22% 00:09:48.144 nvme0n4: ios=3584/3846, merge=0/0, ticks=17098/15732, in_queue=32830, util=89.88% 00:09:48.144 17:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:48.144 [global] 00:09:48.144 thread=1 00:09:48.144 invalidate=1 00:09:48.144 rw=randwrite 00:09:48.144 time_based=1 00:09:48.144 runtime=1 00:09:48.144 ioengine=libaio 00:09:48.144 direct=1 00:09:48.144 bs=4096 00:09:48.144 iodepth=128 00:09:48.144 norandommap=0 00:09:48.144 numjobs=1 00:09:48.144 00:09:48.144 verify_dump=1 00:09:48.144 verify_backlog=512 00:09:48.144 verify_state_save=0 00:09:48.144 do_verify=1 00:09:48.144 verify=crc32c-intel 00:09:48.144 [job0] 00:09:48.144 filename=/dev/nvme0n1 00:09:48.144 [job1] 00:09:48.144 filename=/dev/nvme0n2 00:09:48.144 [job2] 00:09:48.144 filename=/dev/nvme0n3 00:09:48.144 [job3] 00:09:48.144 filename=/dev/nvme0n4 00:09:48.144 Could not set queue depth (nvme0n1) 00:09:48.144 Could not set queue depth (nvme0n2) 00:09:48.144 Could not set queue depth (nvme0n3) 00:09:48.144 Could not set queue depth (nvme0n4) 00:09:48.144 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.144 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.144 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.144 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.144 fio-3.35 00:09:48.144 Starting 4 threads 00:09:49.521 00:09:49.521 job0: (groupid=0, jobs=1): err= 0: pid=70153: Fri Dec 6 17:09:31 2024 00:09:49.521 read: IOPS=4895, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1004msec) 00:09:49.521 slat (usec): min=4, max=12717, avg=109.72, stdev=709.83 00:09:49.521 clat (usec): min=3418, max=27334, avg=13580.58, stdev=3765.97 00:09:49.521 lat (usec): min=3426, max=27346, avg=13690.30, stdev=3798.07 00:09:49.521 clat percentiles (usec): 00:09:49.521 | 1.00th=[ 5276], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10814], 00:09:49.521 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304], 00:09:49.521 | 70.00th=[14484], 80.00th=[16057], 90.00th=[19006], 95.00th=[21365], 00:09:49.521 | 99.00th=[24511], 99.50th=[25035], 99.90th=[27395], 99.95th=[27395], 00:09:49.521 | 99.99th=[27395] 00:09:49.521 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:09:49.521 slat (usec): min=5, max=7634, avg=82.72, stdev=283.43 00:09:49.521 clat (usec): min=4013, max=27310, avg=11812.22, stdev=2756.73 00:09:49.521 lat (usec): min=4128, max=27318, avg=11894.94, stdev=2776.17 00:09:49.521 clat percentiles (usec): 00:09:49.521 | 1.00th=[ 4555], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 9896], 00:09:49.521 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[13042], 00:09:49.521 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[15401], 00:09:49.521 | 99.00th=[18482], 99.50th=[18744], 99.90th=[24773], 99.95th=[24773], 00:09:49.521 | 99.99th=[27395] 00:09:49.521 bw ( KiB/s): min=19536, max=21466, per=32.85%, avg=20501.00, stdev=1364.72, samples=2 00:09:49.521 iops : min= 4884, max= 5366, avg=5125.00, stdev=340.83, samples=2 00:09:49.521 lat (msec) : 4=0.19%, 10=15.22%, 20=80.71%, 50=3.89% 00:09:49.521 cpu : usr=4.99%, sys=10.77%, ctx=769, majf=0, minf=13 00:09:49.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:49.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.521 issued rwts: total=4915,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.521 job1: (groupid=0, jobs=1): err= 0: pid=70154: Fri Dec 6 17:09:31 2024 00:09:49.521 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:09:49.521 slat (usec): min=5, max=14140, avg=137.80, stdev=852.63 00:09:49.521 clat (usec): min=5903, max=38821, avg=16271.82, stdev=6096.17 00:09:49.521 lat (usec): min=5914, max=38859, avg=16409.62, stdev=6150.82 00:09:49.521 clat percentiles (usec): 00:09:49.521 | 1.00th=[ 6325], 5.00th=[10028], 10.00th=[10683], 20.00th=[11469], 00:09:49.521 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13435], 60.00th=[16581], 00:09:49.521 | 70.00th=[17433], 80.00th=[19792], 90.00th=[27395], 95.00th=[28705], 00:09:49.521 | 99.00th=[32900], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:09:49.521 | 99.99th=[39060] 00:09:49.521 write: IOPS=2967, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1013msec); 0 zone resets 00:09:49.521 slat (usec): min=4, max=22486, avg=206.23, stdev=964.27 00:09:49.521 clat (usec): min=4075, max=72231, avg=28883.91, 
stdev=13578.65 00:09:49.521 lat (usec): min=4097, max=72242, avg=29090.14, stdev=13660.20 00:09:49.521 clat percentiles (usec): 00:09:49.521 | 1.00th=[ 5669], 5.00th=[10552], 10.00th=[15008], 20.00th=[19530], 00:09:49.521 | 30.00th=[22676], 40.00th=[23462], 50.00th=[24511], 60.00th=[25035], 00:09:49.521 | 70.00th=[31065], 80.00th=[41681], 90.00th=[51119], 95.00th=[56886], 00:09:49.521 | 99.00th=[63177], 99.50th=[68682], 99.90th=[71828], 99.95th=[71828], 00:09:49.521 | 99.99th=[71828] 00:09:49.521 bw ( KiB/s): min=10744, max=12312, per=18.47%, avg=11528.00, stdev=1108.74, samples=2 00:09:49.521 iops : min= 2686, max= 3078, avg=2882.00, stdev=277.19, samples=2 00:09:49.521 lat (msec) : 10=4.98%, 20=44.29%, 50=44.84%, 100=5.89% 00:09:49.521 cpu : usr=2.77%, sys=7.02%, ctx=436, majf=0, minf=15 00:09:49.521 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:49.521 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.521 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.521 issued rwts: total=2560,3006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.521 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.521 job2: (groupid=0, jobs=1): err= 0: pid=70155: Fri Dec 6 17:09:31 2024 00:09:49.521 read: IOPS=4667, BW=18.2MiB/s (19.1MB/s)(18.4MiB/1010msec) 00:09:49.521 slat (usec): min=3, max=12155, avg=109.63, stdev=729.23 00:09:49.521 clat (usec): min=4134, max=25435, avg=13913.91, stdev=3257.41 00:09:49.521 lat (usec): min=5185, max=25453, avg=14023.55, stdev=3297.73 00:09:49.521 clat percentiles (usec): 00:09:49.521 | 1.00th=[ 7570], 5.00th=[10028], 10.00th=[10683], 20.00th=[11863], 00:09:49.522 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:09:49.522 | 70.00th=[14222], 80.00th=[16057], 90.00th=[18482], 95.00th=[21365], 00:09:49.522 | 99.00th=[24249], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:09:49.522 | 99.99th=[25560] 00:09:49.522 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec); 0 zone resets 00:09:49.522 slat (usec): min=4, max=10128, avg=87.36, stdev=498.44 00:09:49.522 clat (usec): min=3405, max=25356, avg=12183.08, stdev=2260.16 00:09:49.522 lat (usec): min=3425, max=25368, avg=12270.44, stdev=2315.19 00:09:49.522 clat percentiles (usec): 00:09:49.522 | 1.00th=[ 5276], 5.00th=[ 7046], 10.00th=[ 8717], 20.00th=[11076], 00:09:49.522 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12911], 60.00th=[13304], 00:09:49.522 | 70.00th=[13566], 80.00th=[13698], 90.00th=[13960], 95.00th=[14091], 00:09:49.522 | 99.00th=[14484], 99.50th=[19006], 99.90th=[25035], 99.95th=[25035], 00:09:49.522 | 99.99th=[25297] 00:09:49.522 bw ( KiB/s): min=20304, max=20521, per=32.70%, avg=20412.50, stdev=153.44, samples=2 00:09:49.522 iops : min= 5076, max= 5130, avg=5103.00, stdev=38.18, samples=2 00:09:49.522 lat (msec) : 4=0.07%, 10=8.63%, 20=87.67%, 50=3.63% 00:09:49.522 cpu : usr=4.56%, sys=13.18%, ctx=601, majf=0, minf=10 00:09:49.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:49.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.522 issued rwts: total=4714,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.522 job3: (groupid=0, jobs=1): err= 0: pid=70156: Fri Dec 6 17:09:31 2024 00:09:49.522 read: IOPS=2479, BW=9916KiB/s (10.2MB/s)(9956KiB/1004msec) 
00:09:49.522 slat (usec): min=4, max=9991, avg=151.29, stdev=746.72 00:09:49.522 clat (usec): min=3035, max=45276, avg=17096.41, stdev=4754.14 00:09:49.522 lat (usec): min=6197, max=45290, avg=17247.70, stdev=4821.93 00:09:49.522 clat percentiles (usec): 00:09:49.522 | 1.00th=[10945], 5.00th=[12125], 10.00th=[13829], 20.00th=[15008], 00:09:49.522 | 30.00th=[15139], 40.00th=[15533], 50.00th=[15795], 60.00th=[16188], 00:09:49.522 | 70.00th=[16581], 80.00th=[18482], 90.00th=[21627], 95.00th=[26346], 00:09:49.522 | 99.00th=[36963], 99.50th=[39584], 99.90th=[45351], 99.95th=[45351], 00:09:49.522 | 99.99th=[45351] 00:09:49.522 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:09:49.522 slat (usec): min=10, max=18784, avg=235.52, stdev=1114.05 00:09:49.522 clat (usec): min=11813, max=67270, avg=32960.60, stdev=14057.53 00:09:49.522 lat (usec): min=11834, max=67301, avg=33196.12, stdev=14130.24 00:09:49.522 clat percentiles (usec): 00:09:49.522 | 1.00th=[14222], 5.00th=[18482], 10.00th=[21627], 20.00th=[23200], 00:09:49.522 | 30.00th=[23725], 40.00th=[24511], 50.00th=[25035], 60.00th=[28967], 00:09:49.522 | 70.00th=[38536], 80.00th=[44303], 90.00th=[61080], 95.00th=[63701], 00:09:49.522 | 99.00th=[64226], 99.50th=[65799], 99.90th=[67634], 99.95th=[67634], 00:09:49.522 | 99.99th=[67634] 00:09:49.522 bw ( KiB/s): min= 9672, max=10808, per=16.41%, avg=10240.00, stdev=803.27, samples=2 00:09:49.522 iops : min= 2418, max= 2702, avg=2560.00, stdev=200.82, samples=2 00:09:49.522 lat (msec) : 4=0.02%, 10=0.30%, 20=45.99%, 50=45.34%, 100=8.36% 00:09:49.522 cpu : usr=2.89%, sys=6.88%, ctx=352, majf=0, minf=15 00:09:49.522 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:49.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:49.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:49.522 issued rwts: total=2489,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:49.522 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:49.522 00:09:49.522 Run status group 0 (all jobs): 00:09:49.522 READ: bw=56.6MiB/s (59.3MB/s), 9916KiB/s-19.1MiB/s (10.2MB/s-20.1MB/s), io=57.3MiB (60.1MB), run=1004-1013msec 00:09:49.522 WRITE: bw=60.9MiB/s (63.9MB/s), 9.96MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=61.7MiB (64.7MB), run=1004-1013msec 00:09:49.522 00:09:49.522 Disk stats (read/write): 00:09:49.522 nvme0n1: ios=4146/4503, merge=0/0, ticks=52327/51762, in_queue=104089, util=87.88% 00:09:49.522 nvme0n2: ios=2161/2560, merge=0/0, ticks=33597/71445, in_queue=105042, util=89.18% 00:09:49.522 nvme0n3: ios=4098/4214, merge=0/0, ticks=54122/49015, in_queue=103137, util=88.88% 00:09:49.522 nvme0n4: ios=2048/2295, merge=0/0, ticks=17704/34179, in_queue=51883, util=89.81% 00:09:49.522 17:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:49.522 17:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=70169 00:09:49.522 17:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:49.522 17:09:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:49.522 [global] 00:09:49.522 thread=1 00:09:49.522 invalidate=1 00:09:49.522 rw=read 00:09:49.522 time_based=1 00:09:49.522 runtime=10 00:09:49.522 ioengine=libaio 00:09:49.522 direct=1 00:09:49.522 bs=4096 00:09:49.522 iodepth=1 00:09:49.522 norandommap=1 00:09:49.522 numjobs=1 
00:09:49.522 00:09:49.522 [job0] 00:09:49.522 filename=/dev/nvme0n1 00:09:49.522 [job1] 00:09:49.522 filename=/dev/nvme0n2 00:09:49.522 [job2] 00:09:49.522 filename=/dev/nvme0n3 00:09:49.522 [job3] 00:09:49.522 filename=/dev/nvme0n4 00:09:49.522 Could not set queue depth (nvme0n1) 00:09:49.522 Could not set queue depth (nvme0n2) 00:09:49.522 Could not set queue depth (nvme0n3) 00:09:49.522 Could not set queue depth (nvme0n4) 00:09:49.522 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.522 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.522 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.522 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:49.522 fio-3.35 00:09:49.522 Starting 4 threads 00:09:52.803 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:52.803 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=55058432, buflen=4096 00:09:52.803 fio: pid=70213, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.803 17:09:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:52.803 fio: pid=70211, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:52.803 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=39993344, buflen=4096 00:09:53.061 17:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.061 17:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:53.062 fio: pid=70209, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:53.062 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3006464, buflen=4096 00:09:53.319 17:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.319 17:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:53.576 fio: pid=70210, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:53.576 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=50130944, buflen=4096 00:09:53.576 00:09:53.576 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70209: Fri Dec 6 17:09:35 2024 00:09:53.576 read: IOPS=4835, BW=18.9MiB/s (19.8MB/s)(66.9MiB/3540msec) 00:09:53.576 slat (usec): min=9, max=9545, avg=20.68, stdev=134.79 00:09:53.576 clat (usec): min=4, max=3729, avg=184.39, stdev=63.99 00:09:53.576 lat (usec): min=148, max=9878, avg=205.07, stdev=150.72 00:09:53.576 clat percentiles (usec): 00:09:53.576 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:53.576 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 172], 00:09:53.576 | 70.00th=[ 180], 80.00th=[ 210], 90.00th=[ 243], 95.00th=[ 269], 00:09:53.576 | 99.00th=[ 396], 99.50th=[ 429], 99.90th=[ 619], 99.95th=[ 816], 00:09:53.576 | 
99.99th=[ 1958] 00:09:53.576 bw ( KiB/s): min=15112, max=22104, per=37.29%, avg=20246.67, stdev=2600.04, samples=6 00:09:53.576 iops : min= 3778, max= 5526, avg=5061.67, stdev=650.01, samples=6 00:09:53.576 lat (usec) : 10=0.01%, 50=0.01%, 250=91.51%, 500=8.21%, 750=0.19% 00:09:53.576 lat (usec) : 1000=0.04% 00:09:53.576 lat (msec) : 2=0.02%, 4=0.01% 00:09:53.576 cpu : usr=1.58%, sys=7.46%, ctx=17138, majf=0, minf=1 00:09:53.576 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.576 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.576 issued rwts: total=17119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.576 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.576 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70210: Fri Dec 6 17:09:35 2024 00:09:53.576 read: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(47.8MiB/3873msec) 00:09:53.576 slat (usec): min=7, max=11450, avg=27.07, stdev=207.90 00:09:53.576 clat (usec): min=109, max=4982, avg=287.18, stdev=115.94 00:09:53.577 lat (usec): min=143, max=11677, avg=314.26, stdev=239.92 00:09:53.577 clat percentiles (usec): 00:09:53.577 | 1.00th=[ 137], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 233], 00:09:53.577 | 30.00th=[ 269], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 302], 00:09:53.577 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 371], 95.00th=[ 416], 00:09:53.577 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 1020], 99.95th=[ 2933], 00:09:53.577 | 99.99th=[ 4178] 00:09:53.577 bw ( KiB/s): min=10200, max=15183, per=22.12%, avg=12009.00, stdev=1575.53, samples=7 00:09:53.577 iops : min= 2550, max= 3795, avg=3002.14, stdev=393.63, samples=7 00:09:53.577 lat (usec) : 250=24.88%, 500=73.95%, 750=0.97%, 1000=0.07% 00:09:53.577 lat (msec) : 2=0.05%, 4=0.05%, 10=0.02% 00:09:53.577 cpu : usr=1.37%, sys=5.79%, ctx=12271, majf=0, minf=1 00:09:53.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.577 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.577 issued rwts: total=12240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.577 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70211: Fri Dec 6 17:09:35 2024 00:09:53.577 read: IOPS=2987, BW=11.7MiB/s (12.2MB/s)(38.1MiB/3269msec) 00:09:53.577 slat (usec): min=7, max=11478, avg=28.09, stdev=142.58 00:09:53.577 clat (usec): min=112, max=2926, avg=303.82, stdev=77.07 00:09:53.577 lat (usec): min=169, max=11763, avg=331.90, stdev=163.27 00:09:53.577 clat percentiles (usec): 00:09:53.577 | 1.00th=[ 163], 5.00th=[ 174], 10.00th=[ 190], 20.00th=[ 269], 00:09:53.577 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 310], 00:09:53.577 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 383], 95.00th=[ 433], 00:09:53.577 | 99.00th=[ 515], 99.50th=[ 545], 99.90th=[ 685], 99.95th=[ 717], 00:09:53.577 | 99.99th=[ 2933] 00:09:53.577 bw ( KiB/s): min= 9928, max=12424, per=21.10%, avg=11456.00, stdev=823.09, samples=6 00:09:53.577 iops : min= 2482, max= 3106, avg=2864.00, stdev=205.77, samples=6 00:09:53.577 lat (usec) : 250=13.18%, 500=85.24%, 750=1.53% 00:09:53.577 lat (msec) : 2=0.03%, 4=0.01% 00:09:53.577 cpu : usr=1.81%, sys=6.46%, 
ctx=9787, majf=0, minf=2 00:09:53.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.577 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.577 issued rwts: total=9765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.577 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=70213: Fri Dec 6 17:09:35 2024 00:09:53.577 read: IOPS=4536, BW=17.7MiB/s (18.6MB/s)(52.5MiB/2963msec) 00:09:53.577 slat (nsec): min=13939, max=86425, avg=21585.34, stdev=6729.01 00:09:53.577 clat (usec): min=153, max=2481, avg=196.81, stdev=39.47 00:09:53.577 lat (usec): min=169, max=2514, avg=218.39, stdev=41.19 00:09:53.577 clat percentiles (usec): 00:09:53.577 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:09:53.577 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:09:53.577 | 70.00th=[ 202], 80.00th=[ 212], 90.00th=[ 231], 95.00th=[ 251], 00:09:53.577 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 408], 99.95th=[ 545], 00:09:53.577 | 99.99th=[ 2114] 00:09:53.577 bw ( KiB/s): min=17232, max=19344, per=33.75%, avg=18324.80, stdev=891.28, samples=5 00:09:53.577 iops : min= 4308, max= 4836, avg=4581.20, stdev=222.82, samples=5 00:09:53.577 lat (usec) : 250=95.01%, 500=4.91%, 750=0.04%, 1000=0.01% 00:09:53.577 lat (msec) : 2=0.01%, 4=0.01% 00:09:53.577 cpu : usr=1.89%, sys=7.87%, ctx=13453, majf=0, minf=2 00:09:53.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:53.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.577 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.577 issued rwts: total=13443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:53.577 00:09:53.577 Run status group 0 (all jobs): 00:09:53.577 READ: bw=53.0MiB/s (55.6MB/s), 11.7MiB/s-18.9MiB/s (12.2MB/s-19.8MB/s), io=205MiB (215MB), run=2963-3873msec 00:09:53.577 00:09:53.577 Disk stats (read/write): 00:09:53.577 nvme0n1: ios=16452/0, merge=0/0, ticks=3070/0, in_queue=3070, util=95.51% 00:09:53.577 nvme0n2: ios=10971/0, merge=0/0, ticks=3361/0, in_queue=3361, util=95.46% 00:09:53.577 nvme0n3: ios=9025/0, merge=0/0, ticks=2859/0, in_queue=2859, util=96.25% 00:09:53.577 nvme0n4: ios=13006/0, merge=0/0, ticks=2629/0, in_queue=2629, util=96.70% 00:09:53.577 17:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.577 17:09:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:53.834 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:53.834 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:54.399 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.399 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
00:09:54.658 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.658 17:09:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:54.917 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:54.917 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 70169 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.176 nvmf hotplug test: fio failed as expected 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:55.176 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:55.744 17:09:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:55.744 rmmod nvme_tcp 00:09:55.744 rmmod nvme_fabrics 00:09:55.744 rmmod nvme_keyring 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69681 ']' 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69681 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69681 ']' 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69681 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69681 00:09:55.744 killing process with pid 69681 00:09:55.744 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.745 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.745 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69681' 00:09:55.745 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69681 00:09:55.745 17:09:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69681 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:56.003 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:56.004 00:09:56.004 real 0m20.805s 00:09:56.004 user 1m19.652s 00:09:56.004 sys 0m9.622s 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.004 ************************************ 00:09:56.004 END TEST nvmf_fio_target 00:09:56.004 ************************************ 00:09:56.004 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.263 17:09:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.264 ************************************ 00:09:56.264 START TEST nvmf_bdevio 00:09:56.264 ************************************ 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:56.264 * Looking for test storage... 
00:09:56.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.264 --rc genhtml_branch_coverage=1 00:09:56.264 --rc genhtml_function_coverage=1 00:09:56.264 --rc genhtml_legend=1 00:09:56.264 --rc geninfo_all_blocks=1 00:09:56.264 --rc geninfo_unexecuted_blocks=1 00:09:56.264 00:09:56.264 ' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.264 --rc genhtml_branch_coverage=1 00:09:56.264 --rc genhtml_function_coverage=1 00:09:56.264 --rc genhtml_legend=1 00:09:56.264 --rc geninfo_all_blocks=1 00:09:56.264 --rc geninfo_unexecuted_blocks=1 00:09:56.264 00:09:56.264 ' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.264 --rc genhtml_branch_coverage=1 00:09:56.264 --rc genhtml_function_coverage=1 00:09:56.264 --rc genhtml_legend=1 00:09:56.264 --rc geninfo_all_blocks=1 00:09:56.264 --rc geninfo_unexecuted_blocks=1 00:09:56.264 00:09:56.264 ' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.264 --rc genhtml_branch_coverage=1 00:09:56.264 --rc genhtml_function_coverage=1 00:09:56.264 --rc genhtml_legend=1 00:09:56.264 --rc geninfo_all_blocks=1 00:09:56.264 --rc geninfo_unexecuted_blocks=1 00:09:56.264 00:09:56.264 ' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.264 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.265 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:56.265 Cannot find device "nvmf_init_br" 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:56.265 Cannot find device "nvmf_init_br2" 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:56.265 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:56.525 Cannot find device "nvmf_tgt_br" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:56.525 Cannot find device "nvmf_tgt_br2" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:56.525 Cannot find device "nvmf_init_br" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:56.525 Cannot find device "nvmf_init_br2" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:56.525 Cannot find device "nvmf_tgt_br" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:56.525 Cannot find device "nvmf_tgt_br2" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:56.525 Cannot find device "nvmf_br" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:56.525 Cannot find device "nvmf_init_if" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:56.525 Cannot find device "nvmf_init_if2" 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:56.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:56.525 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:56.525 
17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:56.525 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:56.785 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:56.785 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.159 ms 00:09:56.785 00:09:56.785 --- 10.0.0.3 ping statistics --- 00:09:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.785 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:56.785 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:56.785 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:09:56.785 00:09:56.785 --- 10.0.0.4 ping statistics --- 00:09:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.785 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:56.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:56.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:56.785 00:09:56.785 --- 10.0.0.1 ping statistics --- 00:09:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.785 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:56.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:56.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:56.785 00:09:56.785 --- 10.0.0.2 ping statistics --- 00:09:56.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:56.785 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70602 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70602 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70602 ']' 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.785 17:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:56.785 [2024-12-06 17:09:39.039097] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:09:56.785 [2024-12-06 17:09:39.039213] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.043 [2024-12-06 17:09:39.192932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.043 [2024-12-06 17:09:39.234055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.043 [2024-12-06 17:09:39.234126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.043 [2024-12-06 17:09:39.234141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.043 [2024-12-06 17:09:39.234152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.043 [2024-12-06 17:09:39.234160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.043 [2024-12-06 17:09:39.235545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:57.043 [2024-12-06 17:09:39.235635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:57.043 [2024-12-06 17:09:39.235987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:57.043 [2024-12-06 17:09:39.236066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.043 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.043 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:57.044 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.044 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.044 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 [2024-12-06 17:09:39.373011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 Malloc0 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:57.302 [2024-12-06 17:09:39.441347] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:57.302 { 00:09:57.302 "params": { 00:09:57.302 "name": "Nvme$subsystem", 00:09:57.302 "trtype": "$TEST_TRANSPORT", 00:09:57.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:57.302 "adrfam": "ipv4", 00:09:57.302 "trsvcid": "$NVMF_PORT", 00:09:57.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:57.302 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:57.302 "hdgst": ${hdgst:-false}, 00:09:57.302 "ddgst": ${ddgst:-false} 00:09:57.302 }, 00:09:57.302 "method": "bdev_nvme_attach_controller" 00:09:57.302 } 00:09:57.302 EOF 00:09:57.302 )") 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:57.302 17:09:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:57.302 "params": { 00:09:57.302 "name": "Nvme1", 00:09:57.302 "trtype": "tcp", 00:09:57.302 "traddr": "10.0.0.3", 00:09:57.302 "adrfam": "ipv4", 00:09:57.302 "trsvcid": "4420", 00:09:57.302 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:57.302 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:57.302 "hdgst": false, 00:09:57.302 "ddgst": false 00:09:57.302 }, 00:09:57.302 "method": "bdev_nvme_attach_controller" 00:09:57.302 }' 00:09:57.302 [2024-12-06 17:09:39.505645] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:09:57.303 [2024-12-06 17:09:39.505944] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70637 ] 00:09:57.561 [2024-12-06 17:09:39.658785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:57.561 [2024-12-06 17:09:39.701504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.561 [2024-12-06 17:09:39.701587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.561 [2024-12-06 17:09:39.701594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.561 I/O targets: 00:09:57.561 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:57.561 00:09:57.561 00:09:57.561 CUnit - A unit testing framework for C - Version 2.1-3 00:09:57.561 http://cunit.sourceforge.net/ 00:09:57.561 00:09:57.561 00:09:57.561 Suite: bdevio tests on: Nvme1n1 00:09:57.820 Test: blockdev write read block ...passed 00:09:57.820 Test: blockdev write zeroes read block ...passed 00:09:57.820 Test: blockdev write zeroes read no split ...passed 00:09:57.820 Test: blockdev write zeroes read split ...passed 00:09:57.820 Test: blockdev write zeroes read split partial ...passed 00:09:57.820 Test: blockdev reset ...[2024-12-06 17:09:39.964247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:57.820 [2024-12-06 17:09:39.964396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cf1f50 (9): Bad file descriptor 00:09:57.820 [2024-12-06 17:09:39.977718] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:57.820 passed 00:09:57.820 Test: blockdev write read 8 blocks ...passed 00:09:57.820 Test: blockdev write read size > 128k ...passed 00:09:57.820 Test: blockdev write read invalid size ...passed 00:09:57.820 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:57.820 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:57.820 Test: blockdev write read max offset ...passed 00:09:57.820 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:57.820 Test: blockdev writev readv 8 blocks ...passed 00:09:57.820 Test: blockdev writev readv 30 x 1block ...passed 00:09:58.078 Test: blockdev writev readv block ...passed 00:09:58.078 Test: blockdev writev readv size > 128k ...passed 00:09:58.078 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:58.078 Test: blockdev comparev and writev ...[2024-12-06 17:09:40.150566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.150618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.150651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.150663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.151226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.151256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.151288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.151300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.151811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.151857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.151889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.151907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.152410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.152442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.152461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:58.079 [2024-12-06 17:09:40.152477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:58.079 passed 00:09:58.079 Test: blockdev nvme passthru rw ...passed 00:09:58.079 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:09:40.236754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.079 [2024-12-06 17:09:40.236790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.236969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.079 [2024-12-06 17:09:40.236991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.237153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.079 [2024-12-06 17:09:40.237177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:58.079 [2024-12-06 17:09:40.237358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:58.079 [2024-12-06 17:09:40.237379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:58.079 passed 00:09:58.079 Test: blockdev nvme admin passthru ...passed 00:09:58.079 Test: blockdev copy ...passed 00:09:58.079 00:09:58.079 Run Summary: Type Total Ran Passed Failed Inactive 00:09:58.079 suites 1 1 n/a 0 0 00:09:58.079 tests 23 23 23 0 0 00:09:58.079 asserts 152 152 152 0 n/a 00:09:58.079 00:09:58.079 Elapsed time = 0.890 seconds 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.337 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:58.337 rmmod nvme_tcp 00:09:58.337 rmmod nvme_fabrics 00:09:58.337 rmmod nvme_keyring 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 70602 ']' 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70602 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70602 ']' 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70602 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70602 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:58.338 killing process with pid 70602 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70602' 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70602 00:09:58.338 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70602 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:58.596 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:58.855 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:58.855 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:58.855 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:58.856 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.856 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:58.856 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:58.856 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:58.856 ************************************ 00:09:58.856 END TEST nvmf_bdevio 00:09:58.856 ************************************ 00:09:58.856 00:09:58.856 real 0m2.658s 00:09:58.856 user 0m7.807s 00:09:58.856 sys 0m0.768s 00:09:58.856 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.856 17:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:58.856 00:09:58.856 real 3m32.145s 00:09:58.856 user 11m14.208s 00:09:58.856 sys 1m0.760s 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 ************************************ 00:09:58.856 END TEST nvmf_target_core 00:09:58.856 ************************************ 00:09:58.856 17:09:41 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:58.856 17:09:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:58.856 17:09:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.856 17:09:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:58.856 ************************************ 00:09:58.856 START TEST nvmf_target_extra 00:09:58.856 ************************************ 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:58.856 * Looking for test storage... 
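The nvmf_bdevio teardown traced above (nvmf_veth_fini at nvmf/common.sh@233-245, followed by remove_spdk_ns) is the standard cleanup for the veth/namespace topology these suites run on. A minimal sketch of that same sequence collected in one place; the interface, bridge, and namespace names are the ones used throughout this log, while the final ip netns delete is an assumption standing in for the xtrace-disabled _remove_spdk_ns step:

  # Detach every veth end from the bridge, then take it down (as traced at nvmf/common.sh@233-240).
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" nomaster || true
    ip link set "$dev" down || true
  done
  ip link delete nvmf_br type bridge || true
  # Deleting one end of a veth pair removes its peer as well.
  ip link delete nvmf_init_if || true
  ip link delete nvmf_init_if2 || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
  # Assumed equivalent of the xtrace-disabled _remove_spdk_ns step:
  ip netns delete nvmf_tgt_ns_spdk || true

The "|| true" guards mirror why the later nvmf_example run logs "Cannot find device" lines without failing: cleanup is expected to be idempotent even when a previous teardown already removed the links.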
00:09:58.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:58.856 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:59.115 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.116 --rc genhtml_branch_coverage=1 00:09:59.116 --rc genhtml_function_coverage=1 00:09:59.116 --rc genhtml_legend=1 00:09:59.116 --rc geninfo_all_blocks=1 00:09:59.116 --rc geninfo_unexecuted_blocks=1 00:09:59.116 00:09:59.116 ' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.116 --rc genhtml_branch_coverage=1 00:09:59.116 --rc genhtml_function_coverage=1 00:09:59.116 --rc genhtml_legend=1 00:09:59.116 --rc geninfo_all_blocks=1 00:09:59.116 --rc geninfo_unexecuted_blocks=1 00:09:59.116 00:09:59.116 ' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:59.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.116 --rc genhtml_branch_coverage=1 00:09:59.116 --rc genhtml_function_coverage=1 00:09:59.116 --rc genhtml_legend=1 00:09:59.116 --rc geninfo_all_blocks=1 00:09:59.116 --rc geninfo_unexecuted_blocks=1 00:09:59.116 00:09:59.116 ' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.116 --rc genhtml_branch_coverage=1 00:09:59.116 --rc genhtml_function_coverage=1 00:09:59.116 --rc genhtml_legend=1 00:09:59.116 --rc geninfo_all_blocks=1 00:09:59.116 --rc geninfo_unexecuted_blocks=1 00:09:59.116 00:09:59.116 ' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.116 17:09:41 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.116 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.116 ************************************ 00:09:59.116 START TEST nvmf_example 00:09:59.116 ************************************ 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:59.116 * Looking for test storage... 
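The cmp_versions walk traced above (scripts/common.sh@333-368, deciding whether lcov 1.15 is older than 2) splits each version string on dots and dashes and compares the components numerically, treating missing components as zero. A self-contained sketch of that comparison; the function name version_lt is illustrative, and numeric-only components are assumed (the real script validates them with decimal()):

  version_lt() {            # version_lt A B: succeed when A sorts before B
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # missing components count as 0
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1                # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: 1 < 2 on the first component

This is why the trace only ever compares the first components here: 1 < 2 decides the result before the 15 is ever consulted.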
00:09:59.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:59.116 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.376 --rc genhtml_branch_coverage=1 00:09:59.376 --rc genhtml_function_coverage=1 00:09:59.376 --rc genhtml_legend=1 00:09:59.376 --rc geninfo_all_blocks=1 00:09:59.376 --rc geninfo_unexecuted_blocks=1 00:09:59.376 00:09:59.376 ' 00:09:59.376 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.376 --rc genhtml_branch_coverage=1 00:09:59.376 --rc genhtml_function_coverage=1 00:09:59.376 --rc genhtml_legend=1 00:09:59.377 --rc geninfo_all_blocks=1 00:09:59.377 --rc geninfo_unexecuted_blocks=1 00:09:59.377 00:09:59.377 ' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:59.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.377 --rc genhtml_branch_coverage=1 00:09:59.377 --rc genhtml_function_coverage=1 00:09:59.377 --rc genhtml_legend=1 00:09:59.377 --rc geninfo_all_blocks=1 00:09:59.377 --rc geninfo_unexecuted_blocks=1 00:09:59.377 00:09:59.377 ' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:59.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.377 --rc genhtml_branch_coverage=1 00:09:59.377 --rc genhtml_function_coverage=1 00:09:59.377 --rc genhtml_legend=1 00:09:59.377 --rc geninfo_all_blocks=1 00:09:59.377 --rc geninfo_unexecuted_blocks=1 00:09:59.377 00:09:59.377 ' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:59.377 17:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.377 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:59.377 17:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:59.377 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:59.378 Cannot find device "nvmf_init_br" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:59.378 Cannot find device "nvmf_init_br2" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:59.378 Cannot find device "nvmf_tgt_br" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.378 Cannot find device "nvmf_tgt_br2" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:59.378 Cannot find device "nvmf_init_br" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:59.378 Cannot find device "nvmf_init_br2" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:59.378 Cannot find device "nvmf_tgt_br" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:59.378 Cannot find device "nvmf_tgt_br2" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:59.378 Cannot find device "nvmf_br" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:59.378 Cannot find 
device "nvmf_init_if" 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:09:59.378 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:59.636 Cannot find device "nvmf_init_if2" 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:59.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:59.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:59.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:59.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:09:59.636 00:09:59.636 --- 10.0.0.3 ping statistics --- 00:09:59.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.636 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:59.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:59.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:59.636 00:09:59.636 --- 10.0.0.4 ping statistics --- 00:09:59.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.636 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:59.636 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:59.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:09:59.894 00:09:59.894 --- 10.0.0.1 ping statistics --- 00:09:59.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.894 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:59.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:59.894 00:09:59.894 --- 10.0.0.2 ping statistics --- 00:09:59.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.894 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70928 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70928 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.894 17:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.831 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.089 17:09:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:10:01.089 17:09:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:13.304 Initializing NVMe Controllers 00:10:13.304 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:13.304 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:13.304 Initialization complete. Launching workers. 00:10:13.304 ======================================================== 00:10:13.304 Latency(us) 00:10:13.304 Device Information : IOPS MiB/s Average min max 00:10:13.304 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14702.63 57.43 4352.31 696.42 21916.89 00:10:13.304 ======================================================== 00:10:13.304 Total : 14702.63 57.43 4352.31 696.42 21916.89 00:10:13.304 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.304 rmmod nvme_tcp 00:10:13.304 rmmod nvme_fabrics 00:10:13.304 rmmod nvme_keyring 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 70928 ']' 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 70928 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 70928 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:13.304 17:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70928' 00:10:13.304 killing process with pid 70928 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 70928 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 70928 00:10:13.304 nvmf threads initialize successfully 00:10:13.304 bdev subsystem init successfully 00:10:13.304 created a nvmf target service 00:10:13.304 create targets's poll groups done 00:10:13.304 all subsystems of target started 00:10:13.304 nvmf target is running 00:10:13.304 all subsystems of target stopped 00:10:13.304 destroy targets's poll groups done 00:10:13.304 destroyed the nvmf target service 00:10:13.304 bdev subsystem finish successfully 00:10:13.304 nvmf threads destroy successfully 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:13.304 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.305 17:09:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:13.305 ************************************ 00:10:13.305 END TEST nvmf_example 00:10:13.305 ************************************ 00:10:13.305 00:10:13.305 real 0m12.775s 00:10:13.305 user 0m44.876s 00:10:13.305 sys 0m2.130s 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:13.305 ************************************ 00:10:13.305 START TEST nvmf_filesystem 00:10:13.305 ************************************ 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:13.305 * Looking for test storage... 
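The END TEST / START TEST banners and the "real/user/sys" block above are emitted by the run_test wrapper that drives every suite in this log. A rough sketch of a wrapper with the same observable shape; the helper names and exact banner layout are assumptions, and the real wrapper additionally manages xtrace enable/disable around each suite:

  banner() {                # one label between two asterisk rows
    printf '%s\n' '************************************' "$*" '************************************'
  }
  run_test_sketch() {       # run_test_sketch NAME COMMAND [ARGS...]
    local name=$1
    shift
    banner "START TEST $name"
    time "$@"               # produces the real/user/sys lines seen after each suite
    local rc=$?
    banner "END TEST $name"
    return "$rc"
  }
  run_test_sketch nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp

The usage line mirrors the actual invocation traced for nvmf_filesystem below; the suite script path and --transport=tcp argument are taken verbatim from this log.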
00:10:13.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.305 --rc genhtml_branch_coverage=1 00:10:13.305 --rc genhtml_function_coverage=1 00:10:13.305 --rc genhtml_legend=1 00:10:13.305 --rc geninfo_all_blocks=1 00:10:13.305 --rc geninfo_unexecuted_blocks=1 00:10:13.305 00:10:13.305 ' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.305 --rc genhtml_branch_coverage=1 00:10:13.305 --rc genhtml_function_coverage=1 00:10:13.305 --rc genhtml_legend=1 00:10:13.305 --rc geninfo_all_blocks=1 00:10:13.305 --rc geninfo_unexecuted_blocks=1 00:10:13.305 00:10:13.305 ' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.305 --rc genhtml_branch_coverage=1 00:10:13.305 --rc genhtml_function_coverage=1 00:10:13.305 --rc genhtml_legend=1 00:10:13.305 --rc geninfo_all_blocks=1 00:10:13.305 --rc geninfo_unexecuted_blocks=1 00:10:13.305 00:10:13.305 ' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.305 --rc genhtml_branch_coverage=1 00:10:13.305 --rc genhtml_function_coverage=1 00:10:13.305 --rc genhtml_legend=1 00:10:13.305 --rc geninfo_all_blocks=1 00:10:13.305 --rc geninfo_unexecuted_blocks=1 00:10:13.305 00:10:13.305 ' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:13.305 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:13.306 17:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:10:13.306 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:13.306 #define SPDK_CONFIG_H 00:10:13.306 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:13.306 #define SPDK_CONFIG_APPS 1 00:10:13.306 #define SPDK_CONFIG_ARCH 
native 00:10:13.306 #undef SPDK_CONFIG_ASAN 00:10:13.306 #define SPDK_CONFIG_AVAHI 1 00:10:13.306 #undef SPDK_CONFIG_CET 00:10:13.306 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:13.306 #define SPDK_CONFIG_COVERAGE 1 00:10:13.306 #define SPDK_CONFIG_CROSS_PREFIX 00:10:13.306 #undef SPDK_CONFIG_CRYPTO 00:10:13.306 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:13.306 #undef SPDK_CONFIG_CUSTOMOCF 00:10:13.306 #undef SPDK_CONFIG_DAOS 00:10:13.306 #define SPDK_CONFIG_DAOS_DIR 00:10:13.306 #define SPDK_CONFIG_DEBUG 1 00:10:13.306 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:13.307 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:13.307 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:13.307 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:13.307 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:13.307 #undef SPDK_CONFIG_DPDK_UADK 00:10:13.307 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:10:13.307 #define SPDK_CONFIG_EXAMPLES 1 00:10:13.307 #undef SPDK_CONFIG_FC 00:10:13.307 #define SPDK_CONFIG_FC_PATH 00:10:13.307 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:13.307 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:13.307 #define SPDK_CONFIG_FSDEV 1 00:10:13.307 #undef SPDK_CONFIG_FUSE 00:10:13.307 #undef SPDK_CONFIG_FUZZER 00:10:13.307 #define SPDK_CONFIG_FUZZER_LIB 00:10:13.307 #define SPDK_CONFIG_GOLANG 1 00:10:13.307 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:13.307 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:13.307 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:13.307 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:13.307 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:13.307 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:13.307 #undef SPDK_CONFIG_HAVE_LZ4 00:10:13.307 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:13.307 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:13.307 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:13.307 #define SPDK_CONFIG_IDXD 1 00:10:13.307 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:13.307 #undef SPDK_CONFIG_IPSEC_MB 00:10:13.307 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:13.307 #define SPDK_CONFIG_ISAL 1 00:10:13.307 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:13.307 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:13.307 #define SPDK_CONFIG_LIBDIR 00:10:13.307 #undef SPDK_CONFIG_LTO 00:10:13.307 #define SPDK_CONFIG_MAX_LCORES 128 00:10:13.307 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:13.307 #define SPDK_CONFIG_NVME_CUSE 1 00:10:13.307 #undef SPDK_CONFIG_OCF 00:10:13.307 #define SPDK_CONFIG_OCF_PATH 00:10:13.307 #define SPDK_CONFIG_OPENSSL_PATH 00:10:13.307 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:13.307 #define SPDK_CONFIG_PGO_DIR 00:10:13.307 #undef SPDK_CONFIG_PGO_USE 00:10:13.307 #define SPDK_CONFIG_PREFIX /usr/local 00:10:13.307 #undef SPDK_CONFIG_RAID5F 00:10:13.307 #undef SPDK_CONFIG_RBD 00:10:13.307 #define SPDK_CONFIG_RDMA 1 00:10:13.307 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:13.307 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:13.307 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:13.307 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:13.307 #define SPDK_CONFIG_SHARED 1 00:10:13.307 #undef SPDK_CONFIG_SMA 00:10:13.307 #define SPDK_CONFIG_TESTS 1 00:10:13.307 #undef SPDK_CONFIG_TSAN 00:10:13.307 #define SPDK_CONFIG_UBLK 1 00:10:13.307 #define SPDK_CONFIG_UBSAN 1 00:10:13.307 #undef SPDK_CONFIG_UNIT_TESTS 00:10:13.307 #undef SPDK_CONFIG_URING 00:10:13.307 #define SPDK_CONFIG_URING_PATH 00:10:13.307 #undef SPDK_CONFIG_URING_ZNS 00:10:13.307 #define SPDK_CONFIG_USDT 1 00:10:13.307 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:13.307 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:13.307 
#undef SPDK_CONFIG_VFIO_USER 00:10:13.307 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:13.307 #define SPDK_CONFIG_VHOST 1 00:10:13.307 #define SPDK_CONFIG_VIRTIO 1 00:10:13.307 #undef SPDK_CONFIG_VTUNE 00:10:13.307 #define SPDK_CONFIG_VTUNE_DIR 00:10:13.307 #define SPDK_CONFIG_WERROR 1 00:10:13.307 #define SPDK_CONFIG_WPDK_DIR 00:10:13.307 #undef SPDK_CONFIG_XNVME 00:10:13.307 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:13.307 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:13.308 
17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:13.308 17:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:13.308 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:10:13.309 17:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 71200 ]] 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 71200 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:13.309 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.txFcI6 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.txFcI6/tests/target /tmp/spdk.txFcI6 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13980749824 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5588348928 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6256394240 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486431744 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13980749824 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5588348928 00:10:13.310 
17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6266290176 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=95004106752 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4698673152 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:13.310 * Looking for test storage... 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:10:13.310 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13980749824 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:13.311 17:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.311 --rc genhtml_branch_coverage=1 00:10:13.311 --rc genhtml_function_coverage=1 00:10:13.311 --rc genhtml_legend=1 00:10:13.311 --rc geninfo_all_blocks=1 00:10:13.311 --rc geninfo_unexecuted_blocks=1 00:10:13.311 00:10:13.311 ' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.311 --rc genhtml_branch_coverage=1 00:10:13.311 --rc genhtml_function_coverage=1 00:10:13.311 --rc genhtml_legend=1 00:10:13.311 --rc geninfo_all_blocks=1 00:10:13.311 --rc geninfo_unexecuted_blocks=1 00:10:13.311 00:10:13.311 ' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.311 --rc genhtml_branch_coverage=1 00:10:13.311 --rc genhtml_function_coverage=1 00:10:13.311 --rc genhtml_legend=1 00:10:13.311 --rc geninfo_all_blocks=1 00:10:13.311 --rc geninfo_unexecuted_blocks=1 00:10:13.311 00:10:13.311 ' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:13.311 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.311 --rc genhtml_branch_coverage=1 00:10:13.311 --rc genhtml_function_coverage=1 00:10:13.311 --rc genhtml_legend=1 00:10:13.311 --rc geninfo_all_blocks=1 00:10:13.311 --rc geninfo_unexecuted_blocks=1 00:10:13.311 00:10:13.311 ' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.311 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:13.312 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:13.312 17:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:13.312 Cannot find device "nvmf_init_br" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:13.312 Cannot find device "nvmf_init_br2" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:13.312 Cannot find device "nvmf_tgt_br" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:13.312 Cannot find device "nvmf_tgt_br2" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:13.312 Cannot find device "nvmf_init_br" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:13.312 Cannot find device "nvmf_init_br2" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:13.312 Cannot find device "nvmf_tgt_br" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:13.312 Cannot find device "nvmf_tgt_br2" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:13.312 Cannot find device "nvmf_br" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:13.312 Cannot find device "nvmf_init_if" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:13.312 Cannot find device "nvmf_init_if2" 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:13.312 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:13.312 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:13.312 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:13.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:13.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:10:13.313 00:10:13.313 --- 10.0.0.3 ping statistics --- 00:10:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.313 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:13.313 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:13.313 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:13.313 00:10:13.313 --- 10.0.0.4 ping statistics --- 00:10:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.313 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:13.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:13.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:10:13.313 00:10:13.313 --- 10.0.0.1 ping statistics --- 00:10:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:13.313 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:13.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:13.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms
00:10:13.313
00:10:13.313 --- 10.0.0.2 ping statistics ---
00:10:13.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:13.313 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:13.313 17:09:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:10:13.313 ************************************
00:10:13.313 START TEST nvmf_filesystem_no_in_capsule
00:10:13.313 ************************************
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71384
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71384
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71384 ']'
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:13.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:13.313 17:09:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:13.313 [2024-12-06 17:09:55.087355] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:10:13.313 [2024-12-06 17:09:55.088012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:13.313 [2024-12-06 17:09:55.244535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:13.313 [2024-12-06 17:09:55.287860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:13.313 [2024-12-06 17:09:55.287936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:13.313 [2024-12-06 17:09:55.287962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:13.313 [2024-12-06 17:09:55.287972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:13.313 [2024-12-06 17:09:55.287981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
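The network this target is starting on was assembled a few seconds earlier by nvmf_veth_init (traced above): the target-side veth ends nvmf_tgt_if (10.0.0.3/24) and nvmf_tgt_if2 (10.0.0.4/24) sit inside the nvmf_tgt_ns_spdk namespace, the initiator-side ends nvmf_init_if (10.0.0.1/24) and nvmf_init_if2 (10.0.0.2/24) stay in the root namespace, the peer interfaces are enslaved to the nvmf_br bridge, and iptables rules tagged with an SPDK_NVMF comment admit TCP traffic to port 4420. A minimal by-hand sketch of the same bring-up, covering just one initiator/target pair (commands lifted from the trace; run as root; the full helper also creates the *_if2 pair and addresses):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT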
00:10:13.313 [2024-12-06 17:09:55.288895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:13.313 [2024-12-06 17:09:55.289047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:13.313 [2024-12-06 17:09:55.289165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:13.313 [2024-12-06 17:09:55.289170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.879 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:13.879 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:10:13.879 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:13.879 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:13.879 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:13.880 [2024-12-06 17:09:56.167166] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:13.880 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.139 Malloc1
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.139 [2024-12-06 17:09:56.281912] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:10:14.139 {
00:10:14.139 "aliases": [
00:10:14.139 "9c19b163-c700-48e9-9d7e-6eb804649b83"
00:10:14.139 ],
00:10:14.139 "assigned_rate_limits": {
00:10:14.139 "r_mbytes_per_sec": 0,
00:10:14.139 "rw_ios_per_sec": 0,
00:10:14.139 "rw_mbytes_per_sec": 0,
00:10:14.139 "w_mbytes_per_sec": 0
00:10:14.139 },
00:10:14.139 "block_size": 512,
00:10:14.139 "claim_type": "exclusive_write",
00:10:14.139 "claimed": true,
00:10:14.139 "driver_specific": {},
00:10:14.139 "memory_domains": [
00:10:14.139 {
00:10:14.139 "dma_device_id": "system",
00:10:14.139 "dma_device_type": 1
00:10:14.139 },
00:10:14.139 {
00:10:14.139 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:14.139 "dma_device_type": 2
00:10:14.139 }
00:10:14.139 ],
00:10:14.139 "name": "Malloc1",
00:10:14.139 "num_blocks": 1048576,
00:10:14.139 "product_name": "Malloc disk",
00:10:14.139 "supported_io_types": {
00:10:14.139 "abort": true,
00:10:14.139 "compare": false,
00:10:14.139 "compare_and_write": false,
00:10:14.139 "copy": true,
00:10:14.139 "flush": true,
00:10:14.139 "get_zone_info": false,
00:10:14.139 "nvme_admin": false,
00:10:14.139 "nvme_io": false,
00:10:14.139 "nvme_io_md": false,
00:10:14.139 "nvme_iov_md": false,
00:10:14.139 "read": true,
00:10:14.139 "reset": true,
00:10:14.139 "seek_data": false,
00:10:14.139 "seek_hole": false,
00:10:14.139 "unmap": true,
00:10:14.139 "write": true,
00:10:14.139 "write_zeroes": true,
00:10:14.139 "zcopy": true,
00:10:14.139 "zone_append": false,
00:10:14.139 "zone_management": false
00:10:14.139 },
00:10:14.139 "uuid": "9c19b163-c700-48e9-9d7e-6eb804649b83",
00:10:14.139 "zoned": false
00:10:14.139 }
00:10:14.139 ]'
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:14.139 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
00:10:14.398 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:10:14.398 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:10:14.398 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:10:14.398 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:10:14.398 17:09:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:10:16.320 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:10:16.578 17:09:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:17.513 ************************************
00:10:17.513 START TEST filesystem_ext4
00:10:17.513 ************************************
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
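The trace above covers the whole provisioning path: a 512 MiB malloc bdev (Malloc1, 512-byte blocks, hence num_blocks 1048576) is exported through subsystem nqn.2016-06.io.spdk:cnode1 on the 10.0.0.3:4420 TCP listener, the initiator attaches with nvme-cli, the device enumerates as nvme0n1, and parted carves a single SPDK_TEST partition for the filesystem sub-tests that follow. A rough standalone equivalent, assuming a running nvmf_tgt and the repository's scripts/rpc.py (which the rpc_cmd helper drives; the hostnqn/hostid flags from the trace are omitted here for brevity):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%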
00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:17.513 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:17.513 mke2fs 1.47.0 (5-Feb-2023) 00:10:17.772 Discarding device blocks: 0/522240 done 00:10:17.772 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:17.772 Filesystem UUID: 875780ec-41fc-4349-8e53-92a2c80f180e 00:10:17.772 Superblock backups stored on blocks: 00:10:17.772 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:17.772 00:10:17.772 Allocating group tables: 0/64 done 00:10:17.772 Writing inode tables: 0/64 done 00:10:17.772 Creating journal (8192 blocks): done 00:10:17.772 Writing superblocks and filesystem accounting information: 0/64 done 00:10:17.772 00:10:17.772 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:17.772 17:09:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.064 
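The mkfs step above runs through a make_filesystem wrapper whose main job, judging from the xtrace, is picking the right spelling of the "force" flag per filesystem. A rough reconstruction of that helper; the real one in autotest_common.sh also retries on failure, which is elided here:

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F      # mkfs.ext4 spells force with an uppercase F
    else
        force=-f      # mkfs.btrfs and mkfs.xfs use lowercase
    fi
    "mkfs.$fstype" "$force" "$dev_name"
}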
17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 71384 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.064 00:10:23.064 real 0m5.511s 00:10:23.064 user 0m0.023s 00:10:23.064 sys 0m0.065s 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:23.064 ************************************ 00:10:23.064 END TEST filesystem_ext4 00:10:23.064 ************************************ 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.064 ************************************ 00:10:23.064 START TEST filesystem_btrfs 00:10:23.064 ************************************ 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:23.064 17:10:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:23.064 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:23.322 btrfs-progs v6.8.1 00:10:23.322 See https://btrfs.readthedocs.io for more information. 00:10:23.322 00:10:23.322 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:23.322 NOTE: several default settings have changed in version 5.15, please make sure 00:10:23.322 this does not affect your deployments: 00:10:23.322 - DUP for metadata (-m dup) 00:10:23.322 - enabled no-holes (-O no-holes) 00:10:23.322 - enabled free-space-tree (-R free-space-tree) 00:10:23.322 00:10:23.322 Label: (null) 00:10:23.322 UUID: e2335acc-8382-4796-9d12-062e52c2a34c 00:10:23.322 Node size: 16384 00:10:23.322 Sector size: 4096 (CPU page size: 4096) 00:10:23.322 Filesystem size: 510.00MiB 00:10:23.322 Block group profiles: 00:10:23.322 Data: single 8.00MiB 00:10:23.322 Metadata: DUP 32.00MiB 00:10:23.322 System: DUP 8.00MiB 00:10:23.322 SSD detected: yes 00:10:23.322 Zoned device: no 00:10:23.322 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:23.322 Checksum: crc32c 00:10:23.322 Number of devices: 1 00:10:23.322 Devices: 00:10:23.322 ID SIZE PATH 00:10:23.322 1 510.00MiB /dev/nvme0n1p1 00:10:23.322 00:10:23.322 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:23.322 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.322 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.322 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:23.322 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.322 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 71384 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.323 
17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.323 00:10:23.323 real 0m0.194s 00:10:23.323 user 0m0.022s 00:10:23.323 sys 0m0.061s 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:23.323 ************************************ 00:10:23.323 END TEST filesystem_btrfs 00:10:23.323 ************************************ 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.323 ************************************ 00:10:23.323 START TEST filesystem_xfs 00:10:23.323 ************************************ 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:23.323 17:10:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:23.323 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:23.323 = sectsz=512 attr=2, projid32bit=1 00:10:23.323 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:23.323 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:23.323 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:23.323 = sunit=0 swidth=0 blks 00:10:23.323 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:23.323 log =internal log bsize=4096 blocks=16384, version=2 00:10:23.323 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:23.323 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:24.323 Discarding blocks...Done. 00:10:24.323 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:24.323 17:10:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:26.848 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 71384 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:26.849 00:10:26.849 real 0m3.181s 00:10:26.849 user 0m0.024s 00:10:26.849 sys 0m0.049s 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:26.849 ************************************ 00:10:26.849 END TEST filesystem_xfs 00:10:26.849 ************************************ 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:26.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.849 17:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 71384 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71384 ']' 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71384 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71384 00:10:26.849 killing process with pid 71384 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71384' 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 71384 00:10:26.849 17:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 71384 00:10:27.107 ************************************ 00:10:27.107 END TEST nvmf_filesystem_no_in_capsule 00:10:27.107 ************************************ 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:27.107 00:10:27.107 real 0m14.137s 00:10:27.107 user 0m53.946s 00:10:27.107 sys 0m2.088s 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:27.107 ************************************ 00:10:27.107 START TEST nvmf_filesystem_in_capsule 00:10:27.107 ************************************ 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71751 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71751 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71751 ']' 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
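nvmfappstart launches the target inside the dedicated network namespace and then blocks until the RPC socket answers. Roughly, with the binary path and namespace name taken from the trace and the polling loop being an approximation of waitforlisten:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten: poll the UNIX-domain RPC socket until the app responds
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || break   # bail out if the target died during startup
    sleep 0.5
done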
00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.107 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.107 [2024-12-06 17:10:09.253159] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:10:27.107 [2024-12-06 17:10:09.253488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.366 [2024-12-06 17:10:09.418940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.366 [2024-12-06 17:10:09.453226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.366 [2024-12-06 17:10:09.453527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.366 [2024-12-06 17:10:09.453663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.366 [2024-12-06 17:10:09.453790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.366 [2024-12-06 17:10:09.453825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.366 [2024-12-06 17:10:09.454729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.366 [2024-12-06 17:10:09.454807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.366 [2024-12-06 17:10:09.454845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.366 [2024-12-06 17:10:09.454847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.366 [2024-12-06 17:10:09.583103] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.366 17:10:09 
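The only knob that differs from the earlier no_in_capsule pass is -c 4096 on the transport, which allows up to 4 KiB of I/O data to travel inside the command capsule instead of being fetched in a separate data transfer. In rpc.py terms, with the flags copied verbatim from the trace (-u is the I/O unit size; -o appears to toggle the TCP C2H-success optimization, and the harness always passes it):

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096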
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.366 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.624 Malloc1 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.624 [2024-12-06 17:10:09.694534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:27.624 17:10:09 
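The rpc_cmd calls traced above map one-to-one onto rpc.py invocations; the whole export takes four calls, with the bdev dimensions, NQN, serial, and listen address exactly as they appear in the trace:

rpc.py bdev_malloc_create 512 512 -b Malloc1        # 512 MiB, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                      # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420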
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.624 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:27.624 { 00:10:27.624 "aliases": [ 00:10:27.624 "29c5a364-166d-4359-a4b5-39b1e6683284" 00:10:27.624 ], 00:10:27.624 "assigned_rate_limits": { 00:10:27.624 "r_mbytes_per_sec": 0, 00:10:27.624 "rw_ios_per_sec": 0, 00:10:27.624 "rw_mbytes_per_sec": 0, 00:10:27.624 "w_mbytes_per_sec": 0 00:10:27.624 }, 00:10:27.624 "block_size": 512, 00:10:27.624 "claim_type": "exclusive_write", 00:10:27.624 "claimed": true, 00:10:27.624 "driver_specific": {}, 00:10:27.624 "memory_domains": [ 00:10:27.624 { 00:10:27.624 "dma_device_id": "system", 00:10:27.624 "dma_device_type": 1 00:10:27.624 }, 00:10:27.624 { 00:10:27.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.624 "dma_device_type": 2 00:10:27.624 } 00:10:27.624 ], 00:10:27.624 "name": "Malloc1", 00:10:27.624 "num_blocks": 1048576, 00:10:27.624 "product_name": "Malloc disk", 00:10:27.624 "supported_io_types": { 00:10:27.624 "abort": true, 00:10:27.624 "compare": false, 00:10:27.624 "compare_and_write": false, 00:10:27.624 "copy": true, 00:10:27.625 "flush": true, 00:10:27.625 "get_zone_info": false, 00:10:27.625 "nvme_admin": false, 00:10:27.625 "nvme_io": false, 00:10:27.625 "nvme_io_md": false, 00:10:27.625 "nvme_iov_md": false, 00:10:27.625 "read": true, 00:10:27.625 "reset": true, 00:10:27.625 "seek_data": false, 00:10:27.625 "seek_hole": false, 00:10:27.625 "unmap": true, 00:10:27.625 "write": true, 00:10:27.625 "write_zeroes": true, 00:10:27.625 "zcopy": true, 00:10:27.625 "zone_append": false, 00:10:27.625 "zone_management": false 00:10:27.625 }, 00:10:27.625 "uuid": "29c5a364-166d-4359-a4b5-39b1e6683284", 00:10:27.625 "zoned": false 00:10:27.625 } 00:10:27.625 ]' 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:27.625 17:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:27.882 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:27.882 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:27.882 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.882 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:27.882 17:10:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:29.799 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:29.799 17:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:30.058 17:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:30.992 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:30.992 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:30.992 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:30.992 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.992 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:30.992 ************************************ 00:10:30.992 START TEST filesystem_in_capsule_ext4 00:10:30.992 ************************************ 00:10:30.992 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:30.993 mke2fs 1.47.0 (5-Feb-2023) 00:10:30.993 Discarding device blocks: 0/522240 done 00:10:30.993 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:30.993 Filesystem UUID: bb08948f-7301-477b-b943-09c3e685bc8b 00:10:30.993 Superblock backups stored on blocks: 00:10:30.993 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:30.993 00:10:30.993 Allocating group tables: 0/64 done 00:10:30.993 Writing inode tables: 
0/64 done 00:10:30.993 Creating journal (8192 blocks): done 00:10:30.993 Writing superblocks and filesystem accounting information: 0/64 done 00:10:30.993 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:30.993 17:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71751 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.545 ************************************ 00:10:37.545 END TEST filesystem_in_capsule_ext4 00:10:37.545 ************************************ 00:10:37.545 00:10:37.545 real 0m5.527s 00:10:37.545 user 0m0.017s 00:10:37.545 sys 0m0.061s 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.545 
************************************ 00:10:37.545 START TEST filesystem_in_capsule_btrfs 00:10:37.545 ************************************ 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:37.545 btrfs-progs v6.8.1 00:10:37.545 See https://btrfs.readthedocs.io for more information. 00:10:37.545 00:10:37.545 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:37.545 NOTE: several default settings have changed in version 5.15, please make sure 00:10:37.545 this does not affect your deployments: 00:10:37.545 - DUP for metadata (-m dup) 00:10:37.545 - enabled no-holes (-O no-holes) 00:10:37.545 - enabled free-space-tree (-R free-space-tree) 00:10:37.545 00:10:37.545 Label: (null) 00:10:37.545 UUID: 6f6bebdd-d9ca-418b-9a86-d0f083700074 00:10:37.545 Node size: 16384 00:10:37.545 Sector size: 4096 (CPU page size: 4096) 00:10:37.545 Filesystem size: 510.00MiB 00:10:37.545 Block group profiles: 00:10:37.545 Data: single 8.00MiB 00:10:37.545 Metadata: DUP 32.00MiB 00:10:37.545 System: DUP 8.00MiB 00:10:37.545 SSD detected: yes 00:10:37.545 Zoned device: no 00:10:37.545 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:37.545 Checksum: crc32c 00:10:37.545 Number of devices: 1 00:10:37.545 Devices: 00:10:37.545 ID SIZE PATH 00:10:37.545 1 510.00MiB /dev/nvme0n1p1 00:10:37.545 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71751 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:37.545 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:37.545 ************************************ 00:10:37.545 END TEST filesystem_in_capsule_btrfs 00:10:37.545 ************************************ 00:10:37.545 00:10:37.545 real 0m0.166s 00:10:37.545 user 0m0.020s 00:10:37.545 sys 0m0.051s 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:37.546 ************************************ 00:10:37.546 START TEST filesystem_in_capsule_xfs 00:10:37.546 ************************************ 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:37.546 17:10:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:37.546 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:37.546 = sectsz=512 attr=2, projid32bit=1 00:10:37.546 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:37.546 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:37.546 data = bsize=4096 blocks=130560, imaxpct=25 00:10:37.546 = sunit=0 swidth=0 blks 00:10:37.546 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:37.546 log =internal log bsize=4096 blocks=16384, version=2 00:10:37.546 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:37.546 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:37.546 Discarding blocks...Done. 
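Every filesystem variant then gets the same minimal data-path exercise, visible in the entries that follow: mount the fresh filesystem, create and delete a file with syncs in between, unmount, and confirm the target survived. Condensed, with $nvmfpid standing for the pid recorded at startup:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa && sync   # force a write through the NVMe-oF path
rm    /mnt/device/aaa && sync
umount /mnt/device
kill -0 "$nvmfpid"                        # the target must still be running
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still attached
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition table intact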
00:10:37.546 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:37.546 17:10:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71751 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:39.505 ************************************ 00:10:39.505 END TEST filesystem_in_capsule_xfs 00:10:39.505 ************************************ 00:10:39.505 00:10:39.505 real 0m2.555s 00:10:39.505 user 0m0.022s 00:10:39.505 sys 0m0.047s 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71751 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71751 ']' 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71751 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71751 00:10:39.505 killing process with pid 71751 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71751' 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71751 00:10:39.505 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71751 00:10:39.763 ************************************ 00:10:39.763 END TEST nvmf_filesystem_in_capsule 00:10:39.763 ************************************ 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
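The XFS case tears down the same way as the others: flush and unmount, drop the partition with parted, nvme disconnect the initiator, wait until the namespace with serial SPDKISFASTANDAWESOME vanishes from lsblk, delete the subsystem over RPC, then kill the target pid. A sketch of the disconnect-wait loop implied by the traced lsblk/grep probes; the retry bound is an assumption, since only 'local i=0' is visible above:

waitforserial_disconnect() {
    local serial=$1
    local i=0
    # poll until no block device advertises the given serial anymore
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        ((++i > 15)) && return 1   # retry bound assumed; only i=0 appears in the trace
        sleep 1
    done
    return 0
}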
00:10:39.763 00:10:39.763 real 0m12.728s 00:10:39.763 user 0m48.286s 00:10:39.763 sys 0m1.974s 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:39.763 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.764 17:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.764 rmmod nvme_tcp 00:10:39.764 rmmod nvme_fabrics 00:10:39.764 rmmod nvme_keyring 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:39.764 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:40.022 17:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:10:40.022 ************************************ 00:10:40.022 END TEST nvmf_filesystem 00:10:40.022 ************************************ 00:10:40.022 00:10:40.022 real 0m28.173s 00:10:40.022 user 1m42.666s 00:10:40.022 sys 0m4.604s 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:40.022 17:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.281 ************************************ 00:10:40.281 START TEST nvmf_target_discovery 00:10:40.281 ************************************ 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:40.281 * Looking for test storage... 
00:10:40.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:40.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.281 --rc genhtml_branch_coverage=1 00:10:40.281 --rc genhtml_function_coverage=1 00:10:40.281 --rc genhtml_legend=1 00:10:40.281 --rc geninfo_all_blocks=1 00:10:40.281 --rc geninfo_unexecuted_blocks=1 00:10:40.281 00:10:40.281 ' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:40.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.281 --rc genhtml_branch_coverage=1 00:10:40.281 --rc genhtml_function_coverage=1 00:10:40.281 --rc genhtml_legend=1 00:10:40.281 --rc geninfo_all_blocks=1 00:10:40.281 --rc geninfo_unexecuted_blocks=1 00:10:40.281 00:10:40.281 ' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:40.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.281 --rc genhtml_branch_coverage=1 00:10:40.281 --rc genhtml_function_coverage=1 00:10:40.281 --rc genhtml_legend=1 00:10:40.281 --rc geninfo_all_blocks=1 00:10:40.281 --rc geninfo_unexecuted_blocks=1 00:10:40.281 00:10:40.281 ' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:40.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.281 --rc genhtml_branch_coverage=1 00:10:40.281 --rc genhtml_function_coverage=1 00:10:40.281 --rc genhtml_legend=1 00:10:40.281 --rc geninfo_all_blocks=1 00:10:40.281 --rc geninfo_unexecuted_blocks=1 00:10:40.281 00:10:40.281 ' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.281 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.282 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:40.282 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:40.540 Cannot find device "nvmf_init_br" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:40.540 Cannot find device "nvmf_init_br2" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:40.540 Cannot find device "nvmf_tgt_br" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:40.540 Cannot find device "nvmf_tgt_br2" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:40.540 Cannot find device "nvmf_init_br" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:40.540 Cannot find device "nvmf_init_br2" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:40.540 Cannot find device "nvmf_tgt_br" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:40.540 Cannot find device "nvmf_tgt_br2" 00:10:40.540 17:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:40.540 Cannot find device "nvmf_br" 00:10:40.540 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:40.541 Cannot find device "nvmf_init_if" 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:40.541 Cannot find device "nvmf_init_if2" 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:40.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:40.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:40.541 17:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:40.541 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:40.799 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:40.800 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:40.800 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.120 ms 00:10:40.800 00:10:40.800 --- 10.0.0.3 ping statistics --- 00:10:40.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.800 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:40.800 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:40.800 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.093 ms 00:10:40.800 00:10:40.800 --- 10.0.0.4 ping statistics --- 00:10:40.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.800 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:40.800 00:10:40.800 --- 10.0.0.1 ping statistics --- 00:10:40.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.800 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:40.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:10:40.800 00:10:40.800 --- 10.0.0.2 ping statistics --- 00:10:40.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.800 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=72326 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
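nvmf_veth_init, traced above, builds the whole test network from scratch and then sanity-checks it with the four pings. Condensed to one initiator/target veth pair (the trace creates two of each), the sequence is the one logged, with names, addresses, and the port-4420 iptables rule exactly as traced:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_tgt_br up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3   # initiator to target reachability check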
00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 72326 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 72326 ']' 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.800 17:10:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.800 [2024-12-06 17:10:23.072514] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:10:40.800 [2024-12-06 17:10:23.073417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.058 [2024-12-06 17:10:23.226783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.058 [2024-12-06 17:10:23.275353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.058 [2024-12-06 17:10:23.275440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.058 [2024-12-06 17:10:23.275461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.058 [2024-12-06 17:10:23.275475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.058 [2024-12-06 17:10:23.275486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
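nvmfappstart then launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and blocks in waitforlisten until pid 72326 is serving /var/tmp/spdk.sock. A sketch of that wait under the assumption of a plain socket probe; only rpc_addr and max_retries=100 are visible in the trace, and the real helper may poll via an actual RPC instead:

waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    local i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((i++ < max_retries)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        [[ -S $rpc_addr ]] && return 0           # socket probe is an assumption
        sleep 0.1
    done
    return 1
}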
00:10:41.058 [2024-12-06 17:10:23.276562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.058 [2024-12-06 17:10:23.276684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.058 [2024-12-06 17:10:23.276739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.058 [2024-12-06 17:10:23.276733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 [2024-12-06 17:10:23.511723] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 Null1 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 [2024-12-06 17:10:23.561618] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 Null2 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:41.317 Null3 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.317 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 Null4 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.577 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 4420 00:10:41.577 00:10:41.577 Discovery Log Number of Records 6, Generation counter 6 00:10:41.577 =====Discovery Log Entry 0====== 00:10:41.577 trtype: tcp 00:10:41.577 adrfam: ipv4 00:10:41.577 subtype: current discovery subsystem 00:10:41.577 treq: not required 00:10:41.577 portid: 0 00:10:41.577 trsvcid: 4420 00:10:41.577 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:41.577 traddr: 10.0.0.3 00:10:41.577 eflags: explicit discovery connections, duplicate discovery information 00:10:41.577 sectype: none 00:10:41.577 =====Discovery Log Entry 1====== 00:10:41.577 trtype: tcp 00:10:41.577 adrfam: ipv4 00:10:41.577 subtype: nvme subsystem 00:10:41.577 treq: not required 00:10:41.577 portid: 0 00:10:41.577 trsvcid: 4420 00:10:41.577 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:41.577 traddr: 10.0.0.3 00:10:41.577 eflags: none 00:10:41.577 sectype: none 00:10:41.577 =====Discovery Log Entry 2====== 00:10:41.577 trtype: tcp 00:10:41.577 adrfam: ipv4 00:10:41.577 subtype: nvme subsystem 00:10:41.577 treq: not required 00:10:41.577 portid: 0 00:10:41.577 trsvcid: 4420 00:10:41.577 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:41.577 traddr: 10.0.0.3 00:10:41.577 eflags: none 00:10:41.577 sectype: none 00:10:41.577 =====Discovery Log Entry 3====== 00:10:41.577 trtype: tcp 00:10:41.577 adrfam: ipv4 00:10:41.577 subtype: nvme subsystem 00:10:41.577 treq: not required 00:10:41.577 portid: 0 00:10:41.577 trsvcid: 4420 00:10:41.577 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:41.577 traddr: 10.0.0.3 00:10:41.577 eflags: none 00:10:41.577 sectype: none 00:10:41.577 =====Discovery Log Entry 4====== 00:10:41.577 trtype: tcp 00:10:41.577 adrfam: ipv4 00:10:41.577 subtype: nvme subsystem 
00:10:41.577 treq: not required 00:10:41.577 portid: 0 00:10:41.577 trsvcid: 4420 00:10:41.577 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:41.577 traddr: 10.0.0.3 00:10:41.577 eflags: none 00:10:41.577 sectype: none 00:10:41.577 =====Discovery Log Entry 5====== 00:10:41.577 trtype: tcp 00:10:41.577 adrfam: ipv4 00:10:41.577 subtype: discovery subsystem referral 00:10:41.577 treq: not required 00:10:41.577 portid: 0 00:10:41.577 trsvcid: 4430 00:10:41.577 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:41.577 traddr: 10.0.0.3 00:10:41.578 eflags: none 00:10:41.578 sectype: none 00:10:41.578 Perform nvmf subsystem discovery via RPC 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.578 [ 00:10:41.578 { 00:10:41.578 "allow_any_host": true, 00:10:41.578 "hosts": [], 00:10:41.578 "listen_addresses": [ 00:10:41.578 { 00:10:41.578 "adrfam": "IPv4", 00:10:41.578 "traddr": "10.0.0.3", 00:10:41.578 "trsvcid": "4420", 00:10:41.578 "trtype": "TCP" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:41.578 "subtype": "Discovery" 00:10:41.578 }, 00:10:41.578 { 00:10:41.578 "allow_any_host": true, 00:10:41.578 "hosts": [], 00:10:41.578 "listen_addresses": [ 00:10:41.578 { 00:10:41.578 "adrfam": "IPv4", 00:10:41.578 "traddr": "10.0.0.3", 00:10:41.578 "trsvcid": "4420", 00:10:41.578 "trtype": "TCP" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "max_cntlid": 65519, 00:10:41.578 "max_namespaces": 32, 00:10:41.578 "min_cntlid": 1, 00:10:41.578 "model_number": "SPDK bdev Controller", 00:10:41.578 "namespaces": [ 00:10:41.578 { 00:10:41.578 "bdev_name": "Null1", 00:10:41.578 "name": "Null1", 00:10:41.578 "nguid": "C8FF7A41BAFB4085A88C4545724B1288", 00:10:41.578 "nsid": 1, 00:10:41.578 "uuid": "c8ff7a41-bafb-4085-a88c-4545724b1288" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.578 "serial_number": "SPDK00000000000001", 00:10:41.578 "subtype": "NVMe" 00:10:41.578 }, 00:10:41.578 { 00:10:41.578 "allow_any_host": true, 00:10:41.578 "hosts": [], 00:10:41.578 "listen_addresses": [ 00:10:41.578 { 00:10:41.578 "adrfam": "IPv4", 00:10:41.578 "traddr": "10.0.0.3", 00:10:41.578 "trsvcid": "4420", 00:10:41.578 "trtype": "TCP" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "max_cntlid": 65519, 00:10:41.578 "max_namespaces": 32, 00:10:41.578 "min_cntlid": 1, 00:10:41.578 "model_number": "SPDK bdev Controller", 00:10:41.578 "namespaces": [ 00:10:41.578 { 00:10:41.578 "bdev_name": "Null2", 00:10:41.578 "name": "Null2", 00:10:41.578 "nguid": "3C855966672C4722A5DB31FC86BEBE2C", 00:10:41.578 "nsid": 1, 00:10:41.578 "uuid": "3c855966-672c-4722-a5db-31fc86bebe2c" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:41.578 "serial_number": "SPDK00000000000002", 00:10:41.578 "subtype": "NVMe" 00:10:41.578 }, 00:10:41.578 { 00:10:41.578 "allow_any_host": true, 00:10:41.578 "hosts": [], 00:10:41.578 "listen_addresses": [ 00:10:41.578 { 00:10:41.578 "adrfam": "IPv4", 00:10:41.578 "traddr": "10.0.0.3", 00:10:41.578 "trsvcid": "4420", 00:10:41.578 
"trtype": "TCP" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "max_cntlid": 65519, 00:10:41.578 "max_namespaces": 32, 00:10:41.578 "min_cntlid": 1, 00:10:41.578 "model_number": "SPDK bdev Controller", 00:10:41.578 "namespaces": [ 00:10:41.578 { 00:10:41.578 "bdev_name": "Null3", 00:10:41.578 "name": "Null3", 00:10:41.578 "nguid": "3192FA226A7F4B8AA5F0325232CCF811", 00:10:41.578 "nsid": 1, 00:10:41.578 "uuid": "3192fa22-6a7f-4b8a-a5f0-325232ccf811" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:41.578 "serial_number": "SPDK00000000000003", 00:10:41.578 "subtype": "NVMe" 00:10:41.578 }, 00:10:41.578 { 00:10:41.578 "allow_any_host": true, 00:10:41.578 "hosts": [], 00:10:41.578 "listen_addresses": [ 00:10:41.578 { 00:10:41.578 "adrfam": "IPv4", 00:10:41.578 "traddr": "10.0.0.3", 00:10:41.578 "trsvcid": "4420", 00:10:41.578 "trtype": "TCP" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "max_cntlid": 65519, 00:10:41.578 "max_namespaces": 32, 00:10:41.578 "min_cntlid": 1, 00:10:41.578 "model_number": "SPDK bdev Controller", 00:10:41.578 "namespaces": [ 00:10:41.578 { 00:10:41.578 "bdev_name": "Null4", 00:10:41.578 "name": "Null4", 00:10:41.578 "nguid": "7BE4FF69FCB8406A8DA9393F7F0B892D", 00:10:41.578 "nsid": 1, 00:10:41.578 "uuid": "7be4ff69-fcb8-406a-8da9-393f7f0b892d" 00:10:41.578 } 00:10:41.578 ], 00:10:41.578 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:41.578 "serial_number": "SPDK00000000000004", 00:10:41.578 "subtype": "NVMe" 00:10:41.578 } 00:10:41.578 ] 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.578 17:10:23 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.578 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.837 17:10:23 
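
The nvmf_get_subsystems dump above is the RPC-side counterpart of the nvme discover listing: one discovery subsystem plus cnode1-cnode4, each carrying a single Null bdev namespace, which the seq 1 4 loop then tears down pairwise. A minimal sketch of the same read-side check run by hand, assuming scripts/rpc.py talks to the target's default RPC socket:

    # list every NQN the target currently exposes
    scripts/rpc.py nvmf_get_subsystems | jq -r '.[].nqn'
    # count the namespaces behind one subsystem
    scripts/rpc.py nvmf_get_subsystems |
        jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces | length'
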
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.837 17:10:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.837 rmmod nvme_tcp 00:10:41.837 rmmod nvme_fabrics 00:10:41.837 rmmod nvme_keyring 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 72326 ']' 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 72326 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 72326 ']' 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 72326 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72326 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.837 killing process with pid 72326 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72326' 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 72326 00:10:41.837 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 72326 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:42.095 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:10:42.353 00:10:42.353 real 0m2.146s 00:10:42.353 user 0m4.186s 00:10:42.353 sys 0m0.674s 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
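
Teardown above runs roughly in reverse order of setup: each cnodeN subsystem is deleted before its NullN bdev, the 4430 referral is dropped, bdev_get_bdevs confirms nothing was left behind, the nvmf app (pid 72326) is killed, the kernel NVMe modules are unloaded, and only then are the SPDK iptables rules and the veth/bridge/namespace topology dismantled. A condensed sketch of the RPC half of that order, assuming the names used in this run:

    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430
    [ -z "$(scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ]   # nothing left behind
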
# xtrace_disable 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:42.353 ************************************ 00:10:42.353 END TEST nvmf_target_discovery 00:10:42.353 ************************************ 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:42.353 ************************************ 00:10:42.353 START TEST nvmf_referrals 00:10:42.353 ************************************ 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:42.353 * Looking for test storage... 00:10:42.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:42.353 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.612 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:42.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.613 --rc genhtml_branch_coverage=1 00:10:42.613 --rc genhtml_function_coverage=1 00:10:42.613 --rc genhtml_legend=1 00:10:42.613 --rc geninfo_all_blocks=1 00:10:42.613 --rc geninfo_unexecuted_blocks=1 00:10:42.613 00:10:42.613 ' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:42.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.613 --rc genhtml_branch_coverage=1 00:10:42.613 --rc genhtml_function_coverage=1 00:10:42.613 --rc genhtml_legend=1 00:10:42.613 --rc geninfo_all_blocks=1 00:10:42.613 --rc geninfo_unexecuted_blocks=1 00:10:42.613 00:10:42.613 ' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:42.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.613 --rc genhtml_branch_coverage=1 00:10:42.613 --rc genhtml_function_coverage=1 00:10:42.613 --rc genhtml_legend=1 00:10:42.613 --rc geninfo_all_blocks=1 00:10:42.613 --rc geninfo_unexecuted_blocks=1 00:10:42.613 00:10:42.613 ' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:42.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.613 --rc genhtml_branch_coverage=1 00:10:42.613 --rc genhtml_function_coverage=1 00:10:42.613 --rc genhtml_legend=1 00:10:42.613 --rc geninfo_all_blocks=1 00:10:42.613 --rc geninfo_unexecuted_blocks=1 00:10:42.613 00:10:42.613 ' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
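
The trace above is scripts/common.sh deciding whether the installed lcov predates version 2 (the `lt 1.15 2` call): both version strings are split on '.', '-' and ':' and compared field by field, with missing fields treated as zero. A standalone sketch of the same idea, assuming purely numeric fields (the script's real helper additionally normalizes each field through decimal()):

    # succeeds when $1 sorts strictly below $2 (numeric dotted versions only)
    version_lt() {
        local IFS=.-: i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2"
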
00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:42.613 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:42.613 17:10:24 
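
The `[: : integer expression expected` message just above is a real, if benign, scripting bug: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` because the variable it tests is unset in this configuration, and `-eq` demands an integer operand. The conventional hardening is to default the expansion before the numeric test; a minimal sketch, with VAR standing in for whichever variable line 33 actually reads:

    # before: [ "$VAR" -eq 1 ]   -> "[: : integer expression expected" when VAR is empty
    # after: give the test a number even when VAR is unset
    if [ "${VAR:-0}" -eq 1 ]; then
        NVMF_APP+=(--hypothetical-flag)   # illustrative consequent only
    fi
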
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.613 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:42.614 Cannot find device "nvmf_init_br" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:42.614 Cannot find device "nvmf_init_br2" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:42.614 Cannot find device "nvmf_tgt_br" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:42.614 Cannot find device "nvmf_tgt_br2" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:42.614 Cannot find device "nvmf_init_br" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:42.614 Cannot find device "nvmf_init_br2" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:42.614 Cannot find device "nvmf_tgt_br" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:42.614 Cannot find device "nvmf_tgt_br2" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:42.614 Cannot find device "nvmf_br" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:42.614 Cannot find device "nvmf_init_if" 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:42.614 Cannot find device "nvmf_init_if2" 00:10:42.614 17:10:24 
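
The run of `Cannot find device ...` complaints above is expected noise, not a failure: nvmf_veth_init begins by tearing down whatever topology a previous run may have left, and each `ip link` call is paired with a traced `# true`, so missing devices are deliberately ignored. A sketch of that idempotent-cleanup pattern (not common.sh's literal code), assuming the same interface names:

    # best-effort removal of leftovers; absent devices are fine
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
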
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:42.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:42.614 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:42.614 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:42.872 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:42.872 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:42.872 17:10:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:42.872 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:42.872 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:10:42.872 00:10:42.872 --- 10.0.0.3 ping statistics --- 00:10:42.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.872 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:42.872 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:42.872 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:10:42.872 00:10:42.872 --- 10.0.0.4 ping statistics --- 00:10:42.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.872 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:42.872 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:10:43.130 00:10:43.130 --- 10.0.0.1 ping statistics --- 00:10:43.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.130 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:43.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:10:43.130 00:10:43.130 --- 10.0.0.2 ping statistics --- 00:10:43.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.130 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72592 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72592 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72592 ']' 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.130 17:10:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.130 [2024-12-06 17:10:25.288302] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
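
At this point the harness has a working two-sided virtual network and a target to test against: the nvmf_tgt app (pid 72592) runs inside the nvmf_tgt_ns_spdk namespace holding 10.0.0.3/10.0.0.4, the initiator side keeps 10.0.0.1/10.0.0.2, all four veth peers hang off the nvmf_br bridge with iptables ACCEPT rules for the NVMe/TCP ports, and the four pings prove reachability in both directions. A condensed sketch of one half of the topology, assuming the same names and addresses (the *_if2 pair is built the same way):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # initiator -> target namespace
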
00:10:43.130 [2024-12-06 17:10:25.288434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.387 [2024-12-06 17:10:25.440997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.387 [2024-12-06 17:10:25.489322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.387 [2024-12-06 17:10:25.489410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.387 [2024-12-06 17:10:25.489433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.387 [2024-12-06 17:10:25.489447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.387 [2024-12-06 17:10:25.489459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.387 [2024-12-06 17:10:25.490520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.387 [2024-12-06 17:10:25.490593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.387 [2024-12-06 17:10:25.490670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.387 [2024-12-06 17:10:25.490680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 [2024-12-06 17:10:26.503406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 [2024-12-06 17:10:26.516417] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.320 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
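
The checks running here (referrals.sh@49-50) verify the three referrals from both sides: the RPC view via nvmf_discovery_get_referrals and the wire view via nvme discover against the 8009 discovery listener, each reduced to a sorted list of traddr values that must read `127.0.0.2 127.0.0.3 127.0.0.4`. The two pipelines, modulo the rpc_cmd/xtrace wrappers:

    # RPC view of the configured referrals
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # wire view: query the discovery listener, dropping its own entry
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.3 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
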
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.578 17:10:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:44.861 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:44.861 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:44.861 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.862 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.122 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:45.379 17:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:45.379 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:45.380 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:45.380 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:45.380 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:45.380 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 
--hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -a 10.0.0.3 -s 8009 -o json 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:45.638 17:10:27 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.897 
17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.897 rmmod nvme_tcp 00:10:45.897 rmmod nvme_fabrics 00:10:45.897 rmmod nvme_keyring 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72592 ']' 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72592 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72592 ']' 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72592 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72592 00:10:45.897 killing process with pid 72592 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72592' 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72592 00:10:45.897 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72592 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:46.156 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:46.414 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:46.414 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:10:46.415 00:10:46.415 real 0m4.061s 00:10:46.415 user 0m12.843s 00:10:46.415 sys 0m0.973s 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 ************************************ 00:10:46.415 END TEST nvmf_referrals 00:10:46.415 ************************************ 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.415 ************************************ 00:10:46.415 START TEST nvmf_connect_disconnect 00:10:46.415 ************************************ 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:46.415 * Looking for test storage... 
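The nvmf_referrals run that just completed exercises the target's discovery-referral RPCs from two angles: what the target reports over JSON-RPC and what a host sees via `nvme discover` on the discovery port (8009), asserting the two views agree after every mutation. Distilled into a standalone sketch (rpc.py path, addresses and NQNs are taken from the trace; the `--hostnqn`/`--hostid` flags shown above are omitted for brevity, so treat this as a hedged reconstruction rather than the script itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Add one referral owned by the discovery subsystem and one owned by a data subsystem.
  "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  "$rpc" nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

  # View 1: the target's own referral table over RPC.
  "$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

  # View 2: what a host sees on the discovery service, excluding the record that
  # describes the discovery subsystem itself.
  nvme discover -t tcp -a 10.0.0.3 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Remove the referrals again and assert the table is empty.
  "$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  "$rpc" nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
  [[ $("$rpc" nvmf_discovery_get_referrals | jq length) == 0 ]]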
00:10:46.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:46.415 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:46.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.675 --rc genhtml_branch_coverage=1 00:10:46.675 --rc genhtml_function_coverage=1 00:10:46.675 --rc genhtml_legend=1 00:10:46.675 --rc geninfo_all_blocks=1 00:10:46.675 --rc geninfo_unexecuted_blocks=1 00:10:46.675 00:10:46.675 ' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:46.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.675 --rc genhtml_branch_coverage=1 00:10:46.675 --rc genhtml_function_coverage=1 00:10:46.675 --rc genhtml_legend=1 00:10:46.675 --rc geninfo_all_blocks=1 00:10:46.675 --rc geninfo_unexecuted_blocks=1 00:10:46.675 00:10:46.675 ' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:46.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.675 --rc genhtml_branch_coverage=1 00:10:46.675 --rc genhtml_function_coverage=1 00:10:46.675 --rc genhtml_legend=1 00:10:46.675 --rc geninfo_all_blocks=1 00:10:46.675 --rc geninfo_unexecuted_blocks=1 00:10:46.675 00:10:46.675 ' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:46.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.675 --rc genhtml_branch_coverage=1 00:10:46.675 --rc genhtml_function_coverage=1 00:10:46.675 --rc genhtml_legend=1 00:10:46.675 --rc geninfo_all_blocks=1 00:10:46.675 --rc geninfo_unexecuted_blocks=1 00:10:46.675 00:10:46.675 ' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.675 17:10:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.675 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:46.676 Cannot find device "nvmf_init_br" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:46.676 Cannot find device "nvmf_init_br2" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:46.676 Cannot find device "nvmf_tgt_br" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.676 Cannot find device "nvmf_tgt_br2" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:46.676 Cannot find device "nvmf_init_br" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:46.676 Cannot find device "nvmf_init_br2" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:46.676 Cannot find device "nvmf_tgt_br" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:46.676 Cannot find device "nvmf_tgt_br2" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
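The `Cannot find device` messages above are harmless: nvmf_veth_init first tears down any stale interfaces, and each probe is followed by `true` so a missing device is not an error. The lines that follow rebuild the topology. Condensed from those traced commands (a condensation, not the helper itself): two initiator veth pairs stay on the host, two target veth pairs have their far ends moved into a namespace, and one bridge ties the peer ends together.

  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-side interfaces live in the namespace, where nvmf_tgt will run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers so initiator and target segments reach each other.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up && ip link set "$l" master nvmf_br
  done

The iptables ACCEPT rules added right after the bridge carry an `-m comment --comment 'SPDK_NVMF:...'` tag; that tag is what lets the later `iptr` teardown drop exactly these rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`.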
00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:46.676 Cannot find device "nvmf_br" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:46.676 Cannot find device "nvmf_init_if" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:46.676 Cannot find device "nvmf_init_if2" 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:10:46.676 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:46.676 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:46.935 17:10:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:46.935 17:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.935 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:47.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:10:47.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:10:47.194 00:10:47.194 --- 10.0.0.3 ping statistics --- 00:10:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.194 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:47.194 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:47.194 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.075 ms 00:10:47.194 00:10:47.194 --- 10.0.0.4 ping statistics --- 00:10:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.194 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:47.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:47.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:10:47.194 00:10:47.194 --- 10.0.0.1 ping statistics --- 00:10:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.194 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:47.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:10:47.194 00:10:47.194 --- 10.0.0.2 ping statistics --- 00:10:47.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.194 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=72951 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 72951 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 72951 ']' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.194 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.194 [2024-12-06 17:10:29.373894] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:10:47.194 [2024-12-06 17:10:29.374032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.453 [2024-12-06 17:10:29.528424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.453 [2024-12-06 17:10:29.577154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.453 [2024-12-06 17:10:29.577246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.453 [2024-12-06 17:10:29.577288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.453 [2024-12-06 17:10:29.577306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.453 [2024-12-06 17:10:29.577319] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
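`waitforlisten 72951` above blocks until the just-forked target answers on its JSON-RPC socket. A minimal approximation of that pattern, assuming the stock `/var/tmp/spdk.sock` path printed in the trace (the real helper in autotest_common.sh does more bookkeeping, so treat this strictly as a sketch):

  # Poll until the RPC socket answers or the process dies. rpc_get_methods is a
  # standard SPDK RPC; the retry budget here is an arbitrary choice.
  waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
        &>/dev/null && return 0                # socket is up and answering
      sleep 0.1
    done
    return 1
  }

Note that although the app itself is launched under `ip netns exec nvmf_tgt_ns_spdk`, a path-based UNIX-domain socket is not isolated by a network namespace, which is why the RPC calls in this trace run from the host side without `ip netns exec`.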
00:10:47.453 [2024-12-06 17:10:29.578382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.453 [2024-12-06 17:10:29.578449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.453 [2024-12-06 17:10:29.578532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.453 [2024-12-06 17:10:29.578542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.713 [2024-12-06 17:10:29.826387] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.713 17:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:47.713 [2024-12-06 17:10:29.902025] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:47.713 17:10:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:50.243 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.136 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:59.136 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:59.136 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:59.137 rmmod nvme_tcp 00:10:59.137 rmmod nvme_fabrics 00:10:59.137 rmmod nvme_keyring 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 72951 ']' 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 72951 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 72951 ']' 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 72951 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
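The five `disconnected 1 controller(s)` notices above are the test body itself. Reconstructed from the provisioning RPCs traced before them and from nvme-cli's output, the whole exercise is roughly the following; the loop body is a hedged sketch (the script's own helpers add extra checks, e.g. waiting for the namespace to surface before disconnecting):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Provisioning, exactly as traced: TCP transport, 64 MB malloc bdev with
  # 512-byte blocks, subsystem, namespace, listener on 10.0.0.3:4420.
  "$rpc" nvmf_create_transport -t tcp -o -u 8192 -c 0
  "$rpc" bdev_malloc_create 64 512                                   # -> Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Test body (sketch): num_iterations=5 connect/disconnect cycles; each
  # disconnect prints the "NQN:... disconnected 1 controller(s)" line seen above.
  for ((i = 0; i < 5; i++)); do
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    # (the real script also waits for the SPDKISFASTANDAWESOME namespace here)
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done

NVME_HOSTNQN and NVME_HOSTID hold the values produced by `nvme gen-hostnqn` when nvmf/common.sh was sourced earlier in this trace.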
00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72951 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72951' 00:10:59.137 killing process with pid 72951 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 72951 00:10:59.137 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 72951 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:59.414 17:10:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:10:59.414 00:10:59.414 real 0m13.046s 00:10:59.414 user 0m46.355s 00:10:59.414 sys 0m1.990s 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 ************************************ 00:10:59.414 END TEST nvmf_connect_disconnect 00:10:59.414 ************************************ 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:59.414 ************************************ 00:10:59.414 START TEST nvmf_multitarget 00:10:59.414 ************************************ 00:10:59.414 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:59.675 * Looking for test storage... 
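Worth noting from the teardown above: firewall cleanup never enumerates rules. Every rule the harness installs carries an SPDK_NVMF comment tag, so a single save/filter/restore round-trip removes them all at once. A condensed sketch of the helper pair in nvmf/common.sh, reconstructed from the commands visible in this log:

```sh
# ipts: run iptables, appending a comment that records the exact rule spec;
# iptr: dump the ruleset, drop every SPDK_NVMF-tagged line, load the rest back.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # at test setup
iptr                                                            # at teardown
```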
00:10:59.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.675 --rc genhtml_branch_coverage=1 00:10:59.675 --rc genhtml_function_coverage=1 00:10:59.675 --rc genhtml_legend=1 00:10:59.675 --rc geninfo_all_blocks=1 00:10:59.675 --rc geninfo_unexecuted_blocks=1 00:10:59.675 00:10:59.675 ' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.675 --rc genhtml_branch_coverage=1 00:10:59.675 --rc genhtml_function_coverage=1 00:10:59.675 --rc genhtml_legend=1 00:10:59.675 --rc geninfo_all_blocks=1 00:10:59.675 --rc geninfo_unexecuted_blocks=1 00:10:59.675 00:10:59.675 ' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.675 --rc genhtml_branch_coverage=1 00:10:59.675 --rc genhtml_function_coverage=1 00:10:59.675 --rc genhtml_legend=1 00:10:59.675 --rc geninfo_all_blocks=1 00:10:59.675 --rc geninfo_unexecuted_blocks=1 00:10:59.675 00:10:59.675 ' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:59.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.675 --rc genhtml_branch_coverage=1 00:10:59.675 --rc genhtml_function_coverage=1 00:10:59.675 --rc genhtml_legend=1 00:10:59.675 --rc geninfo_all_blocks=1 00:10:59.675 --rc geninfo_unexecuted_blocks=1 00:10:59.675 00:10:59.675 ' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.675 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.676 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:59.676 Cannot find device "nvmf_init_br" 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:59.676 Cannot find device "nvmf_init_br2" 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:59.676 Cannot find device "nvmf_tgt_br" 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.676 Cannot find device "nvmf_tgt_br2" 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:10:59.676 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:59.936 Cannot find device "nvmf_init_br" 00:10:59.936 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:10:59.936 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:59.936 Cannot find device "nvmf_init_br2" 00:10:59.936 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:10:59.936 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:59.936 Cannot find device "nvmf_tgt_br" 00:10:59.936 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:10:59.936 17:10:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:59.936 Cannot find device "nvmf_tgt_br2" 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:59.936 Cannot find device "nvmf_br" 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:59.936 Cannot find device "nvmf_init_if" 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:59.936 Cannot find device "nvmf_init_if2" 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.936 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:59.936 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:00.195 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:00.195 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:11:00.195 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:00.195 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:00.195 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:00.195 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:00.196 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:00.196 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:11:00.196 00:11:00.196 --- 10.0.0.3 ping statistics --- 00:11:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.196 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:00.196 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:00.196 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:11:00.196 00:11:00.196 --- 10.0.0.4 ping statistics --- 00:11:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.196 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:00.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:00.196 00:11:00.196 --- 10.0.0.1 ping statistics --- 00:11:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.196 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:00.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:11:00.196 00:11:00.196 --- 10.0.0.2 ping statistics --- 00:11:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.196 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=73392 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 73392 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 73392 ']' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.196 17:10:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:00.196 [2024-12-06 17:10:42.420811] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
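The setup sequence above (nvmf_veth_init) builds a small two-namespace topology: initiator veths stay in the default namespace, target veths are moved into nvmf_tgt_ns_spdk, everything is bridged together, and the four pings whose statistics appear above confirm connectivity in both directions. Condensed to its essential commands, with names and addresses as logged and the repeated per-interface steps collapsed to one pair:

```sh
# One initiator/target veth pair shown; the log repeats this for the *_if2 pair.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# bring all links up and open port 4420, then verify: ping -c 1 10.0.0.3, etc.
```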
00:11:00.196 [2024-12-06 17:10:42.420944] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.455 [2024-12-06 17:10:42.573042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.455 [2024-12-06 17:10:42.606042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.455 [2024-12-06 17:10:42.606101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.455 [2024-12-06 17:10:42.606113] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.455 [2024-12-06 17:10:42.606122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.455 [2024-12-06 17:10:42.606130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.455 [2024-12-06 17:10:42.606903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.455 [2024-12-06 17:10:42.607060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.455 [2024-12-06 17:10:42.606981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.455 [2024-12-06 17:10:42.607061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:01.391 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:01.650 "nvmf_tgt_1" 00:11:01.650 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:01.650 "nvmf_tgt_2" 00:11:01.650 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.650 17:10:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:11:01.908 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:01.908 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:02.166 true 00:11:02.166 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:02.166 true 00:11:02.166 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:02.166 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.425 rmmod nvme_tcp 00:11:02.425 rmmod nvme_fabrics 00:11:02.425 rmmod nvme_keyring 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 73392 ']' 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 73392 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 73392 ']' 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 73392 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73392 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.425 killing process with pid 73392 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
73392' 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 73392 00:11:02.425 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 73392 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.684 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.941 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:02.941 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.941 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.941 17:10:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:11:02.941 00:11:02.941 
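Stripped of xtrace noise, the multitarget test that just finished is a short RPC round-trip against the running nvmf_tgt. The bracket checks below mirror the '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' comparisons seen in the log, and each command and count appears verbatim above:

```sh
rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
$rpc nvmf_delete_target -n nvmf_tgt_1              # each delete returns "true"
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]
```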
real 0m3.306s 00:11:02.941 user 0m10.362s 00:11:02.941 sys 0m0.743s 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:02.941 ************************************ 00:11:02.941 END TEST nvmf_multitarget 00:11:02.941 ************************************ 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.941 ************************************ 00:11:02.941 START TEST nvmf_rpc 00:11:02.941 ************************************ 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:02.941 * Looking for test storage... 00:11:02.941 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.941 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:03.200 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:03.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.201 --rc genhtml_branch_coverage=1 00:11:03.201 --rc genhtml_function_coverage=1 00:11:03.201 --rc genhtml_legend=1 00:11:03.201 --rc geninfo_all_blocks=1 00:11:03.201 --rc geninfo_unexecuted_blocks=1 00:11:03.201 00:11:03.201 ' 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:03.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.201 --rc genhtml_branch_coverage=1 00:11:03.201 --rc genhtml_function_coverage=1 00:11:03.201 --rc genhtml_legend=1 00:11:03.201 --rc geninfo_all_blocks=1 00:11:03.201 --rc geninfo_unexecuted_blocks=1 00:11:03.201 00:11:03.201 ' 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:03.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.201 --rc genhtml_branch_coverage=1 00:11:03.201 --rc genhtml_function_coverage=1 00:11:03.201 --rc genhtml_legend=1 00:11:03.201 --rc geninfo_all_blocks=1 00:11:03.201 --rc geninfo_unexecuted_blocks=1 00:11:03.201 00:11:03.201 ' 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:03.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.201 --rc genhtml_branch_coverage=1 00:11:03.201 --rc genhtml_function_coverage=1 00:11:03.201 --rc genhtml_legend=1 00:11:03.201 --rc geninfo_all_blocks=1 00:11:03.201 --rc geninfo_unexecuted_blocks=1 00:11:03.201 00:11:03.201 ' 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.201 17:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:11:03.201 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:11:03.202 Cannot find device "nvmf_init_br"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:11:03.202 Cannot find device "nvmf_init_br2"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:11:03.202 Cannot find device "nvmf_tgt_br"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:11:03.202 Cannot find device "nvmf_tgt_br2"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:11:03.202 Cannot find device "nvmf_init_br"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:11:03.202 Cannot find device "nvmf_init_br2"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:11:03.202 Cannot find device "nvmf_tgt_br"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:11:03.202 Cannot find device "nvmf_tgt_br2"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:11:03.202 Cannot find device "nvmf_br"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:11:03.202 Cannot find device "nvmf_init_if"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:11:03.202 Cannot find device "nvmf_init_if2"
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:03.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:03.202 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:11:03.202 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:03.460 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:03.460 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms
00:11:03.460
00:11:03.460 --- 10.0.0.3 ping statistics ---
00:11:03.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:03.460 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:11:03.460 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:03.460 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:03.460 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms
00:11:03.460
00:11:03.460 --- 10.0.0.4 ping statistics ---
00:11:03.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:03.460 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:03.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:03.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms
00:11:03.461
00:11:03.461 --- 10.0.0.1 ping statistics ---
00:11:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:03.461 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:03.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:03.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms
00:11:03.461
00:11:03.461 --- 10.0.0.2 ping statistics ---
00:11:03.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:03.461 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73673
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73673
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73673 ']'
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:03.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:03.461 17:10:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
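
Note: the nvmf_veth_init trace above builds the virtual test network (NET_TYPE=virt) that every later connect relies on: two initiator veth pairs left in the default namespace, two target veth pairs whose device ends are moved into the nvmf_tgt_ns_spdk namespace, all four peer ends enslaved to the nvmf_br bridge, and iptables ACCEPT rules for the NVMe/TCP port. A hand-condensed sketch of that topology (not the literal nvmf/common.sh source; names and addresses match the trace, teardown and error handling omitted):

  # Target side lives in its own network namespace; initiators stay in the default one.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator 1 (10.0.0.1)
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2  # initiator 2 (10.0.0.2)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target 1 (10.0.0.3)
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target 2 (10.0.0.4)
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target ends into the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # Bring everything up, then bridge the four peer ends so the two sides can talk.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # Open TCP/4420 for the NVMe-oF listeners and allow bridged forwarding.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                  # initiator side reaches the targets...
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # ...and the namespace reaches back

The four successful one-packet pings above are exactly this sanity check; with the bridge in place the suite needs no physical NIC.
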
00:11:03.719 [2024-12-06 17:10:45.797197] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:11:03.719 [2024-12-06 17:10:45.797314] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:03.719 [2024-12-06 17:10:45.948375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:03.719 [2024-12-06 17:10:45.981747] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:03.719 [2024-12-06 17:10:45.981826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:03.719 [2024-12-06 17:10:45.981846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:03.719 [2024-12-06 17:10:45.981859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:03.719 [2024-12-06 17:10:45.981870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:03.719 [2024-12-06 17:10:45.982774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:03.719 [2024-12-06 17:10:45.982813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:03.719 [2024-12-06 17:10:45.982897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:03.719 [2024-12-06 17:10:45.982904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:03.978 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:11:03.979 "poll_groups": [
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0,
00:11:03.979 "name": "nvmf_tgt_poll_group_000",
00:11:03.979 "pending_bdev_io": 0,
00:11:03.979 "transports": []
00:11:03.979 },
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0,
00:11:03.979 "name": "nvmf_tgt_poll_group_001",
00:11:03.979 "pending_bdev_io": 0,
00:11:03.979 "transports": []
00:11:03.979 },
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0, 00:11:03.979 "name": "nvmf_tgt_poll_group_002", 00:11:03.979 "pending_bdev_io": 0, 00:11:03.979 "transports": [] 00:11:03.979 }, 00:11:03.979 { 00:11:03.979 "admin_qpairs": 0, 00:11:03.979 "completed_nvme_io": 0, 00:11:03.979 "current_admin_qpairs": 0, 00:11:03.979 "current_io_qpairs": 0, 00:11:03.979 "io_qpairs": 0, 00:11:03.979 "name": "nvmf_tgt_poll_group_003", 00:11:03.979 "pending_bdev_io": 0, 00:11:03.979 "transports": [] 00:11:03.979 } 00:11:03.979 ], 00:11:03.979 "tick_rate": 2200000000 00:11:03.979 }' 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.979 [2024-12-06 17:10:46.234476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:03.979 "poll_groups": [ 00:11:03.979 { 00:11:03.979 "admin_qpairs": 0, 00:11:03.979 "completed_nvme_io": 0, 00:11:03.979 "current_admin_qpairs": 0, 00:11:03.979 "current_io_qpairs": 0, 00:11:03.979 "io_qpairs": 0, 00:11:03.979 "name": "nvmf_tgt_poll_group_000", 00:11:03.979 "pending_bdev_io": 0, 00:11:03.979 "transports": [ 00:11:03.979 { 00:11:03.979 "trtype": "TCP" 00:11:03.979 } 00:11:03.979 ] 00:11:03.979 }, 00:11:03.979 { 00:11:03.979 "admin_qpairs": 0, 00:11:03.979 "completed_nvme_io": 0, 00:11:03.979 "current_admin_qpairs": 0, 00:11:03.979 "current_io_qpairs": 0, 00:11:03.979 "io_qpairs": 0, 00:11:03.979 "name": "nvmf_tgt_poll_group_001", 00:11:03.979 "pending_bdev_io": 0, 00:11:03.979 "transports": [ 00:11:03.979 { 00:11:03.979 "trtype": "TCP" 00:11:03.979 } 00:11:03.979 ] 00:11:03.979 }, 00:11:03.979 { 00:11:03.979 "admin_qpairs": 0, 00:11:03.979 "completed_nvme_io": 0, 00:11:03.979 "current_admin_qpairs": 0, 00:11:03.979 "current_io_qpairs": 0, 00:11:03.979 "io_qpairs": 0, 00:11:03.979 "name": "nvmf_tgt_poll_group_002", 00:11:03.979 "pending_bdev_io": 0, 00:11:03.979 "transports": [ 00:11:03.979 { 00:11:03.979 "trtype": "TCP" 00:11:03.979 } 
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:11:03.979 "poll_groups": [
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0,
00:11:03.979 "name": "nvmf_tgt_poll_group_000",
00:11:03.979 "pending_bdev_io": 0,
00:11:03.979 "transports": [
00:11:03.979 {
00:11:03.979 "trtype": "TCP"
00:11:03.979 }
00:11:03.979 ]
00:11:03.979 },
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0,
00:11:03.979 "name": "nvmf_tgt_poll_group_001",
00:11:03.979 "pending_bdev_io": 0,
00:11:03.979 "transports": [
00:11:03.979 {
00:11:03.979 "trtype": "TCP"
00:11:03.979 }
00:11:03.979 ]
00:11:03.979 },
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0,
00:11:03.979 "name": "nvmf_tgt_poll_group_002",
00:11:03.979 "pending_bdev_io": 0,
00:11:03.979 "transports": [
00:11:03.979 {
00:11:03.979 "trtype": "TCP"
00:11:03.979 }
00:11:03.979 ]
00:11:03.979 },
00:11:03.979 {
00:11:03.979 "admin_qpairs": 0,
00:11:03.979 "completed_nvme_io": 0,
00:11:03.979 "current_admin_qpairs": 0,
00:11:03.979 "current_io_qpairs": 0,
00:11:03.979 "io_qpairs": 0,
00:11:03.979 "name": "nvmf_tgt_poll_group_003",
00:11:03.979 "pending_bdev_io": 0,
00:11:03.979 "transports": [
00:11:03.979 {
00:11:03.979 "trtype": "TCP"
00:11:03.979 }
00:11:03.979 ]
00:11:03.979 }
00:11:03.979 ],
00:11:03.979 "tick_rate": 2200000000
00:11:03.979 }'
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:03.979 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.239 Malloc1
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:04.239 [2024-12-06 17:10:46.431392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -a 10.0.0.3 -s 4420
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -a 10.0.0.3 -s 4420
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -a 10.0.0.3 -s 4420
[2024-12-06 17:10:46.461752] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56'
Failed to write to /dev/nvme-fabrics: Input/output error
could not add new controller: failed to write to nvme-fabrics device
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
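
Note: the Input/output error above is the expected-failure path, not a test failure. The subsystem was created with allow_any_host disabled (rpc.sh@54 passes -d), so a fabrics connect from an unlisted host NQN is rejected by nvmf_qpair_access_allowed, and the NOT wrapper inverts the exit status (es=1 is what the test wants). The trace that follows allow-lists the host and retries. The round trip, condensed into plain commands (reusing the rpc() helper sketched earlier; the host NQN/ID is this VM's machine UUID, as in the trace):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
  rpc nvmf_subsystem_allow_any_host -d "$SUBNQN"        # deny hosts not on the allow list
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn "$HOSTNQN" \
      && echo "unexpected: connect should have been rejected"
  rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"      # allow-list exactly this host
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn "$HOSTNQN"  # now succeeds

The inverse is exercised right after: nvmf_subsystem_remove_host makes the same connect fail again, and nvmf_subsystem_allow_any_host -e reopens the subsystem to any host NQN.
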
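
Note: once the allow-list round trip passes, the remainder of this section (target/rpc.sh@81 onward, traced densely below) repeats one full lifecycle five times, loops=5 having been set at rpc.sh@11: create the subsystem, add the TCP listener on 10.0.0.3:4420, attach Malloc1 as namespace 5, open it to any host, connect, wait for the serial to appear under lsblk, then disconnect and tear everything down. The shape of each iteration, with the waitforserial idiom paraphrased from the trace (not the literal common/autotest_common.sh source):

  # Reuses rpc, SUBNQN and HOSTNQN from the sketches above.
  waitforserial() {   # poll until a block device carrying this serial shows up
      local serial=$1 i=0
      sleep 2                                   # give the fabric connect a head start
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 1
      done
      return 1
  }
  for i in $(seq 1 5); do
      rpc nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
      rpc nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.3 -s 4420
      rpc nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5     # nsid 5, removed below by number
      rpc nvmf_subsystem_allow_any_host "$SUBNQN"
      nvme connect -t tcp -a 10.0.0.3 -s 4420 -n "$SUBNQN" --hostnqn "$HOSTNQN"
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n "$SUBNQN"
      rpc nvmf_subsystem_remove_ns "$SUBNQN" 5
      rpc nvmf_delete_subsystem "$SUBNQN"
  done

Each "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line in the trace below marks the end of one such iteration.
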
00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.239 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:04.497 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.497 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.497 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.497 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.497 17:10:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:06.399 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.657 [2024-12-06 17:10:48.772707] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56' 00:11:06.657 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:06.657 could not add new controller: failed to write to nvme-fabrics device 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.657 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:06.916 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.916 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.916 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.916 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:06.916 17:10:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:08.818 17:10:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 [2024-12-06 17:10:51.073936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.818 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:09.088 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.088 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:09.088 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.088 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:09.088 17:10:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:11.015 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.275 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.276 [2024-12-06 17:10:53.381103] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.276 17:10:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:11.276 17:10:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.806 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 17:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 [2024-12-06 17:10:55.696438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:13.807 17:10:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:15.711 17:10:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:15.970 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:15.971 17:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 [2024-12-06 17:10:58.100017] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.971 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:16.230 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.230 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.230 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.230 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:16.230 17:10:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:18.136 17:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.136 [2024-12-06 17:11:00.415369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.136 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:18.395 17:11:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
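The connect/disconnect cycles traced above lean on the waitforserial and waitforserial_disconnect helpers from common/autotest_common.sh: after nvme connect, the harness polls lsblk until a block device advertising the subsystem serial shows up (or, on disconnect, disappears). A simplified reconstruction of that polling loop, for illustration only — the retry bound and the lsblk/grep probe mirror the trace at common/autotest_common.sh@1202-1212, but the shipped helper also supports waiting for more than one device:

# Sketch of the serial-polling pattern seen at common/autotest_common.sh@1202-1212.
# Illustrative reconstruction, not the shipped helper.
waitforserial_sketch() {
    local serial=$1 i=0
    sleep 2  # give udev time to create the namespace block device
    while (( i++ <= 15 )); do
        # grep -c counts lsblk rows whose SERIAL column carries our subsystem serial
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        sleep 2
    done
    return 1  # device never appeared within the retry budget
}
waitforserial_sketch SPDKISFASTANDAWESOME

In the trace above the first probe already reports nvme_devices=1, so the helper returns on its first pass.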
00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 [2024-12-06 17:11:02.734592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 [2024-12-06 17:11:02.782606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.932 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:20.933 17:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 [2024-12-06 17:11:02.830634] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 [2024-12-06 17:11:02.878660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 
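The loop being traced here (target/rpc.sh@99-107) is pure RPC churn: five times in a row it creates the subsystem, attaches a listener and a namespace, then strips the namespace and deletes the subsystem, without ever connecting a host. Stripped of the rpc_cmd/xtrace plumbing, one iteration amounts to the following calls — a sketch, with the rpc.py path taken from the one this run uses for the next test:

# One iteration of the subsystem create/teardown churn (target/rpc.sh@99-107), condensed.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
"$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # namespace id 1
"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The earlier loop at target/rpc.sh@81-94 has the same shape but inserts the nvme connect / waitforserial / disconnect round-trip between add_ns and remove_ns, and pins the namespace id to 5.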
17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 [2024-12-06 17:11:02.934727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.933 17:11:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.933 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.933 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:20.933 "poll_groups": [ 00:11:20.933 { 00:11:20.933 "admin_qpairs": 2, 00:11:20.933 "completed_nvme_io": 65, 00:11:20.933 "current_admin_qpairs": 0, 00:11:20.933 "current_io_qpairs": 0, 00:11:20.933 "io_qpairs": 16, 00:11:20.933 "name": "nvmf_tgt_poll_group_000", 00:11:20.933 "pending_bdev_io": 0, 00:11:20.933 "transports": [ 00:11:20.933 { 00:11:20.933 "trtype": "TCP" 00:11:20.933 } 00:11:20.933 ] 00:11:20.933 }, 00:11:20.933 { 00:11:20.933 "admin_qpairs": 3, 00:11:20.933 "completed_nvme_io": 118, 00:11:20.933 "current_admin_qpairs": 0, 00:11:20.933 "current_io_qpairs": 0, 00:11:20.933 "io_qpairs": 17, 00:11:20.933 "name": "nvmf_tgt_poll_group_001", 00:11:20.933 "pending_bdev_io": 0, 00:11:20.933 "transports": [ 00:11:20.933 { 00:11:20.933 "trtype": "TCP" 00:11:20.933 } 00:11:20.933 ] 00:11:20.933 }, 00:11:20.933 { 00:11:20.933 "admin_qpairs": 1, 00:11:20.933 "completed_nvme_io": 119, 00:11:20.933 "current_admin_qpairs": 0, 00:11:20.933 "current_io_qpairs": 0, 00:11:20.933 "io_qpairs": 19, 00:11:20.933 "name": "nvmf_tgt_poll_group_002", 00:11:20.933 "pending_bdev_io": 0, 00:11:20.933 "transports": [ 00:11:20.933 { 00:11:20.933 "trtype": "TCP" 00:11:20.933 } 00:11:20.933 ] 00:11:20.933 }, 00:11:20.933 { 00:11:20.933 "admin_qpairs": 1, 00:11:20.933 "completed_nvme_io": 118, 00:11:20.933 "current_admin_qpairs": 0, 00:11:20.933 "current_io_qpairs": 0, 00:11:20.933 "io_qpairs": 18, 00:11:20.933 "name": "nvmf_tgt_poll_group_003", 00:11:20.933 "pending_bdev_io": 0, 00:11:20.933 "transports": [ 00:11:20.933 { 00:11:20.933 "trtype": "TCP" 00:11:20.933 } 00:11:20.933 ] 00:11:20.933 } 00:11:20.933 ], 
00:11:20.933 "tick_rate": 2200000000 00:11:20.933 }' 00:11:20.933 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:20.933 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.934 rmmod nvme_tcp 00:11:20.934 rmmod nvme_fabrics 00:11:20.934 rmmod nvme_keyring 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73673 ']' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73673 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73673 ']' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73673 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73673 00:11:20.934 killing process with pid 73673 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.934 17:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73673' 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73673 00:11:20.934 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73673 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:21.193 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:21.194 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:11:21.453 00:11:21.453 real 0m18.558s 00:11:21.453 user 1m8.329s 00:11:21.453 sys 0m2.611s 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.453 ************************************ 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.453 END TEST nvmf_rpc 00:11:21.453 ************************************ 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.453 ************************************ 00:11:21.453 START TEST nvmf_invalid 00:11:21.453 ************************************ 00:11:21.453 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:21.453 * Looking for test storage... 00:11:21.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:21.713 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:21.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.714 --rc genhtml_branch_coverage=1 00:11:21.714 --rc genhtml_function_coverage=1 00:11:21.714 --rc genhtml_legend=1 00:11:21.714 --rc geninfo_all_blocks=1 00:11:21.714 --rc geninfo_unexecuted_blocks=1 00:11:21.714 00:11:21.714 ' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:21.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.714 --rc genhtml_branch_coverage=1 00:11:21.714 --rc genhtml_function_coverage=1 00:11:21.714 --rc genhtml_legend=1 00:11:21.714 --rc geninfo_all_blocks=1 00:11:21.714 --rc geninfo_unexecuted_blocks=1 00:11:21.714 00:11:21.714 ' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:21.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.714 --rc genhtml_branch_coverage=1 00:11:21.714 --rc genhtml_function_coverage=1 00:11:21.714 --rc genhtml_legend=1 00:11:21.714 --rc geninfo_all_blocks=1 00:11:21.714 --rc geninfo_unexecuted_blocks=1 00:11:21.714 00:11:21.714 ' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:21.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.714 --rc genhtml_branch_coverage=1 00:11:21.714 --rc genhtml_function_coverage=1 00:11:21.714 --rc genhtml_legend=1 00:11:21.714 --rc geninfo_all_blocks=1 00:11:21.714 --rc geninfo_unexecuted_blocks=1 00:11:21.714 00:11:21.714 ' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:21.714 17:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.714 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:21.714 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
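NVMF_TARGET_NS_CMD, defined just above, is the array the harness prepends to target-side commands so they execute inside the nvmf_tgt_ns_spdk network namespace while initiator-side commands stay in the root namespace. A sketch of how it gets used, matching the invocations that appear further down this log:

# NVMF_TARGET_NS_CMD wraps target-side commands in the namespace (defined above).
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
# Bring an in-namespace interface up:
"${NVMF_TARGET_NS_CMD[@]}" ip link set nvmf_tgt_if up
# Launch the target itself inside the namespace (cf. nvmf/common.sh@508 below):
"${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF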
00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:21.715 Cannot find device "nvmf_init_br" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:21.715 Cannot find device "nvmf_init_br2" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:21.715 Cannot find device "nvmf_tgt_br" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.715 Cannot find device "nvmf_tgt_br2" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:21.715 Cannot find device "nvmf_init_br" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:21.715 Cannot find device "nvmf_init_br2" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:21.715 Cannot find device "nvmf_tgt_br" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:21.715 Cannot find device "nvmf_tgt_br2" 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:11:21.715 17:11:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:21.974 Cannot find device "nvmf_br" 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:21.974 Cannot find device "nvmf_init_if" 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:21.974 Cannot find device "nvmf_init_if2" 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.974 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:21.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:21.974 17:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:21.974 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:21.974 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:11:21.974 00:11:21.974 --- 10.0.0.3 ping statistics --- 00:11:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.974 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:21.974 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:21.974 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:11:21.974 00:11:21.974 --- 10.0.0.4 ping statistics --- 00:11:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.974 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:21.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:21.974 00:11:21.974 --- 10.0.0.1 ping statistics --- 00:11:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.974 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:21.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:11:21.974 00:11:21.974 --- 10.0.0.2 ping statistics --- 00:11:21.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.974 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.974 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.975 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.975 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.975 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=74221 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 74221 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 74221 ']' 00:11:22.234 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.235 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.235 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.235 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.235 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:22.235 [2024-12-06 17:11:04.346737] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
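
The setup traced above gives the test a two-sided network: the host keeps the initiator-side veth ends (10.0.0.1 and 10.0.0.2), the target-side ends (10.0.0.3 and 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, connectivity is ping-verified in both directions, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same topology, with the second veth pair and the link-up/ping steps omitted:

    # One of the two symmetric veth pairs; nvmf_init_if stays on the host,
    # nvmf_tgt_if moves into the namespace that will run the SPDK target.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bridge the host-side peers together so initiator and target can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    # Start the target inside the namespace, as nvmfappstart does above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
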
00:11:22.235 [2024-12-06 17:11:04.346846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.235 [2024-12-06 17:11:04.500483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.494 [2024-12-06 17:11:04.533044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.494 [2024-12-06 17:11:04.533110] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.494 [2024-12-06 17:11:04.533138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.494 [2024-12-06 17:11:04.533145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.494 [2024-12-06 17:11:04.533152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.494 [2024-12-06 17:11:04.534066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.494 [2024-12-06 17:11:04.534108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.494 [2024-12-06 17:11:04.534311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.494 [2024-12-06 17:11:04.534328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:22.494 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9125 00:11:22.754 [2024-12-06 17:11:04.931887] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:22.755 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/12/06 17:11:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode9125 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:22.755 request: 00:11:22.755 { 00:11:22.755 "method": "nvmf_create_subsystem", 00:11:22.755 "params": { 00:11:22.755 "nqn": "nqn.2016-06.io.spdk:cnode9125", 00:11:22.755 "tgt_name": "foobar" 00:11:22.755 } 00:11:22.755 } 00:11:22.755 Got JSON-RPC error response 00:11:22.755 GoRPCClient: error on JSON-RPC call' 00:11:22.755 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/12/06 17:11:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode9125 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:22.755 request: 00:11:22.755 { 00:11:22.755 "method": "nvmf_create_subsystem", 00:11:22.755 "params": { 00:11:22.755 "nqn": "nqn.2016-06.io.spdk:cnode9125", 00:11:22.755 "tgt_name": "foobar" 00:11:22.755 } 00:11:22.755 } 00:11:22.755 Got JSON-RPC error response 00:11:22.755 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:22.755 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:22.755 17:11:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19728 00:11:23.017 [2024-12-06 17:11:05.244196] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19728: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:23.017 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/12/06 17:11:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19728 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:23.017 request: 00:11:23.017 { 00:11:23.017 "method": "nvmf_create_subsystem", 00:11:23.018 "params": { 00:11:23.018 "nqn": "nqn.2016-06.io.spdk:cnode19728", 00:11:23.018 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:23.018 } 00:11:23.018 } 00:11:23.018 Got JSON-RPC error response 00:11:23.018 GoRPCClient: error on JSON-RPC call' 00:11:23.018 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/12/06 17:11:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19728 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:23.018 request: 00:11:23.018 { 00:11:23.018 "method": "nvmf_create_subsystem", 00:11:23.018 "params": { 00:11:23.018 "nqn": "nqn.2016-06.io.spdk:cnode19728", 00:11:23.018 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:23.018 } 00:11:23.018 } 00:11:23.018 Got JSON-RPC error response 00:11:23.018 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:23.018 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:23.018 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15577 00:11:23.277 [2024-12-06 17:11:05.560475] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15577: invalid model number 'SPDK_Controller' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/12/06 17:11:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode15577], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:23.536 request: 00:11:23.536 { 00:11:23.536 "method": "nvmf_create_subsystem", 00:11:23.536 "params": { 00:11:23.536 "nqn": "nqn.2016-06.io.spdk:cnode15577", 00:11:23.536 "model_number": "SPDK_Controller\u001f" 00:11:23.536 } 
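
The tgt_name:foobar case above and the serial-number and model-number cases around it all follow one negative-test pattern: take a value that would otherwise be legal ('SPDKISFASTANDAWESOME', 'SPDK_Controller'), append a non-printable byte with bash's $'...\037' quoting (0x1f, unit separator), capture the JSON-RPC failure, and glob-match the error text. A sketch of the check, with the full rpc.py path shortened:

    # Append 0x1f to an otherwise valid serial number; nvmf_create_subsystem
    # must reject non-printable characters with Code=-32602 "Invalid SN".
    out=$(rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
              nqn.2016-06.io.spdk:cnode19728 2>&1) || true
    [[ $out == *"Invalid SN"* ]]   # the test asserts on the error text, as above
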
00:11:23.536 } 00:11:23.536 Got JSON-RPC error response 00:11:23.536 GoRPCClient: error on JSON-RPC call' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/12/06 17:11:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode15577], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:23.536 request: 00:11:23.536 { 00:11:23.536 "method": "nvmf_create_subsystem", 00:11:23.536 "params": { 00:11:23.536 "nqn": "nqn.2016-06.io.spdk:cnode15577", 00:11:23.536 "model_number": "SPDK_Controller\u001f" 00:11:23.536 } 00:11:23.536 } 00:11:23.536 Got JSON-RPC error response 00:11:23.536 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.536 17:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:23.536 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:23.537 17:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:23.537 
17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'xm(C7m5Fbtze@|E^))zY(' 00:11:23.537 17:11:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'xm(C7m5Fbtze@|E^))zY(' nqn.2016-06.io.spdk:cnode12893 00:11:23.798 [2024-12-06 17:11:05.984801] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12893: invalid serial number 'xm(C7m5Fbtze@|E^))zY(' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/12/06 17:11:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12893 serial_number:xm(C7m5Fbtze@|E^))zY(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN xm(C7m5Fbtze@|E^))zY( 00:11:23.798 request: 00:11:23.798 { 00:11:23.798 "method": "nvmf_create_subsystem", 00:11:23.798 "params": { 00:11:23.798 "nqn": "nqn.2016-06.io.spdk:cnode12893", 
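
gen_random_s, traced character by character above, assembles a random string from the printable range 0x20-0x7f; invalid.sh asks for 21 characters here because the NVMe Identify Controller serial-number field holds at most 20 bytes, so a 21-character SN must be rejected whatever its content. A compact stand-in for the helper (a sketch; the traced version enumerates the chars array explicitly):

    # Minimal stand-in for gen_random_s: N random printable ASCII characters.
    gen_random_s() {
        local length=$1 ll c string=
        for (( ll = 0; ll < length; ll++ )); do
            printf -v c '\\x%x' $(( RANDOM % 96 + 32 ))   # 0x20..0x7f
            string+=$(echo -e "$c")
        done
        echo "$string"
    }
    gen_random_s 21   # one byte past the 20-byte NVMe serial-number field
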
00:11:23.798 "serial_number": "xm(C7m5Fbtze@|E^))zY(" 00:11:23.798 } 00:11:23.798 } 00:11:23.798 Got JSON-RPC error response 00:11:23.798 GoRPCClient: error on JSON-RPC call' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/12/06 17:11:05 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12893 serial_number:xm(C7m5Fbtze@|E^))zY(], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN xm(C7m5Fbtze@|E^))zY( 00:11:23.798 request: 00:11:23.798 { 00:11:23.798 "method": "nvmf_create_subsystem", 00:11:23.798 "params": { 00:11:23.798 "nqn": "nqn.2016-06.io.spdk:cnode12893", 00:11:23.798 "serial_number": "xm(C7m5Fbtze@|E^))zY(" 00:11:23.798 } 00:11:23.798 } 00:11:23.798 Got JSON-RPC error response 00:11:23.798 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:23.798 17:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:23.798 
17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.798 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:23.799 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:24.076 
17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 
17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 
17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 
00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.076 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz' 00:11:24.077 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz' nqn.2016-06.io.spdk:cnode24661 00:11:24.335 [2024-12-06 17:11:06.457209] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24661: invalid model number 'j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz' 00:11:24.335 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/12/06 17:11:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz nqn:nqn.2016-06.io.spdk:cnode24661], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz 00:11:24.335 request: 00:11:24.335 { 00:11:24.335 "method": "nvmf_create_subsystem", 00:11:24.335 "params": { 00:11:24.335 "nqn": "nqn.2016-06.io.spdk:cnode24661", 
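
The second gen_random_s pass repeats the trick for the model number: 41 random characters against the 40-byte NVMe MN field. Note the guard traced at invalid.sh line 28 ([[ j == \- ]] above): before using the string, the script checks that it does not begin with '-', which rpc.py's option parser would otherwise swallow. A sketch of that guard; the recovery path on a match is not visible in this trace, so the fix-up below is hypothetical:

    mn=$(gen_random_s 41)             # one past the 40-byte MN field
    [[ $mn == -* ]] && mn=${mn/#-/ }  # hypothetical fix-up; real handling not shown
    rpc.py nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode24661
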
00:11:24.335 "model_number": "j|/[jm\"6IjE Bt)p ys4z5Rb\u007fDMYAm`?0[u#idOrz" 00:11:24.335 } 00:11:24.335 } 00:11:24.335 Got JSON-RPC error response 00:11:24.335 GoRPCClient: error on JSON-RPC call' 00:11:24.335 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/12/06 17:11:06 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz nqn:nqn.2016-06.io.spdk:cnode24661], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN j|/[jm"6IjE Bt)p ys4z5RbDMYAm`?0[u#idOrz 00:11:24.335 request: 00:11:24.335 { 00:11:24.335 "method": "nvmf_create_subsystem", 00:11:24.335 "params": { 00:11:24.335 "nqn": "nqn.2016-06.io.spdk:cnode24661", 00:11:24.335 "model_number": "j|/[jm\"6IjE Bt)p ys4z5Rb\u007fDMYAm`?0[u#idOrz" 00:11:24.335 } 00:11:24.335 } 00:11:24.335 Got JSON-RPC error response 00:11:24.335 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:24.335 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:24.594 [2024-12-06 17:11:06.729542] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.594 17:11:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:24.852 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:24.852 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:24.852 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:24.852 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:24.852 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:25.110 [2024-12-06 17:11:07.386136] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:25.368 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/12/06 17:11:07 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:25.368 request: 00:11:25.368 { 00:11:25.368 "method": "nvmf_subsystem_remove_listener", 00:11:25.368 "params": { 00:11:25.368 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:25.368 "listen_address": { 00:11:25.368 "trtype": "tcp", 00:11:25.368 "traddr": "", 00:11:25.368 "trsvcid": "4421" 00:11:25.368 } 00:11:25.368 } 00:11:25.368 } 00:11:25.368 Got JSON-RPC error response 00:11:25.368 GoRPCClient: error on JSON-RPC call' 00:11:25.368 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/12/06 17:11:07 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:11:25.368 request: 00:11:25.368 { 00:11:25.368 "method": "nvmf_subsystem_remove_listener", 00:11:25.368 "params": { 00:11:25.368 "nqn": 
"nqn.2016-06.io.spdk:cnode", 00:11:25.368 "listen_address": { 00:11:25.368 "trtype": "tcp", 00:11:25.368 "traddr": "", 00:11:25.368 "trsvcid": "4421" 00:11:25.368 } 00:11:25.368 } 00:11:25.368 } 00:11:25.368 Got JSON-RPC error response 00:11:25.368 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:25.368 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31850 -i 0 00:11:25.368 [2024-12-06 17:11:07.654333] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31850: invalid cntlid range [0-65519] 00:11:25.627 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/12/06 17:11:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:25.627 request: 00:11:25.627 { 00:11:25.627 "method": "nvmf_create_subsystem", 00:11:25.627 "params": { 00:11:25.627 "nqn": "nqn.2016-06.io.spdk:cnode31850", 00:11:25.627 "min_cntlid": 0 00:11:25.627 } 00:11:25.627 } 00:11:25.627 Got JSON-RPC error response 00:11:25.627 GoRPCClient: error on JSON-RPC call' 00:11:25.627 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/12/06 17:11:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode31850], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:11:25.627 request: 00:11:25.627 { 00:11:25.627 "method": "nvmf_create_subsystem", 00:11:25.627 "params": { 00:11:25.627 "nqn": "nqn.2016-06.io.spdk:cnode31850", 00:11:25.627 "min_cntlid": 0 00:11:25.627 } 00:11:25.627 } 00:11:25.627 Got JSON-RPC error response 00:11:25.627 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:25.627 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11247 -i 65520 00:11:25.885 [2024-12-06 17:11:07.952343] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11247: invalid cntlid range [65520-65519] 00:11:25.885 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/12/06 17:11:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11247], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:25.885 request: 00:11:25.885 { 00:11:25.885 "method": "nvmf_create_subsystem", 00:11:25.885 "params": { 00:11:25.885 "nqn": "nqn.2016-06.io.spdk:cnode11247", 00:11:25.885 "min_cntlid": 65520 00:11:25.885 } 00:11:25.885 } 00:11:25.885 Got JSON-RPC error response 00:11:25.885 GoRPCClient: error on JSON-RPC call' 00:11:25.885 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/12/06 17:11:07 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode11247], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:11:25.885 request: 00:11:25.885 { 00:11:25.885 "method": "nvmf_create_subsystem", 00:11:25.885 "params": { 00:11:25.885 
"nqn": "nqn.2016-06.io.spdk:cnode11247", 00:11:25.885 "min_cntlid": 65520 00:11:25.885 } 00:11:25.885 } 00:11:25.886 Got JSON-RPC error response 00:11:25.886 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:25.886 17:11:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5552 -I 0 00:11:26.143 [2024-12-06 17:11:08.212525] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5552: invalid cntlid range [1-0] 00:11:26.143 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/12/06 17:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5552], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:26.143 request: 00:11:26.143 { 00:11:26.143 "method": "nvmf_create_subsystem", 00:11:26.143 "params": { 00:11:26.143 "nqn": "nqn.2016-06.io.spdk:cnode5552", 00:11:26.143 "max_cntlid": 0 00:11:26.143 } 00:11:26.143 } 00:11:26.143 Got JSON-RPC error response 00:11:26.143 GoRPCClient: error on JSON-RPC call' 00:11:26.143 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/12/06 17:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode5552], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:11:26.143 request: 00:11:26.143 { 00:11:26.143 "method": "nvmf_create_subsystem", 00:11:26.143 "params": { 00:11:26.143 "nqn": "nqn.2016-06.io.spdk:cnode5552", 00:11:26.143 "max_cntlid": 0 00:11:26.143 } 00:11:26.143 } 00:11:26.143 Got JSON-RPC error response 00:11:26.144 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.144 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24496 -I 65520 00:11:26.401 [2024-12-06 17:11:08.548828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24496: invalid cntlid range [1-65520] 00:11:26.401 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/12/06 17:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24496], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:26.401 request: 00:11:26.401 { 00:11:26.401 "method": "nvmf_create_subsystem", 00:11:26.401 "params": { 00:11:26.401 "nqn": "nqn.2016-06.io.spdk:cnode24496", 00:11:26.401 "max_cntlid": 65520 00:11:26.401 } 00:11:26.401 } 00:11:26.401 Got JSON-RPC error response 00:11:26.401 GoRPCClient: error on JSON-RPC call' 00:11:26.401 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 2024/12/06 17:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode24496], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:11:26.401 request: 00:11:26.401 { 00:11:26.401 "method": "nvmf_create_subsystem", 00:11:26.401 "params": { 00:11:26.401 "nqn": "nqn.2016-06.io.spdk:cnode24496", 00:11:26.401 "max_cntlid": 65520 00:11:26.401 } 00:11:26.401 } 00:11:26.401 Got 
JSON-RPC error response 00:11:26.401 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.401 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20158 -i 6 -I 5 00:11:26.660 [2024-12-06 17:11:08.877150] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20158: invalid cntlid range [6-5] 00:11:26.660 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/12/06 17:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode20158], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:26.660 request: 00:11:26.660 { 00:11:26.660 "method": "nvmf_create_subsystem", 00:11:26.660 "params": { 00:11:26.660 "nqn": "nqn.2016-06.io.spdk:cnode20158", 00:11:26.660 "min_cntlid": 6, 00:11:26.660 "max_cntlid": 5 00:11:26.660 } 00:11:26.660 } 00:11:26.660 Got JSON-RPC error response 00:11:26.660 GoRPCClient: error on JSON-RPC call' 00:11:26.660 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/12/06 17:11:08 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode20158], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:11:26.660 request: 00:11:26.660 { 00:11:26.660 "method": "nvmf_create_subsystem", 00:11:26.660 "params": { 00:11:26.660 "nqn": "nqn.2016-06.io.spdk:cnode20158", 00:11:26.660 "min_cntlid": 6, 00:11:26.660 "max_cntlid": 5 00:11:26.660 } 00:11:26.660 } 00:11:26.660 Got JSON-RPC error response 00:11:26.660 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:26.660 17:11:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:26.918 { 00:11:26.918 "name": "foobar", 00:11:26.918 "method": "nvmf_delete_target", 00:11:26.918 "req_id": 1 00:11:26.918 } 00:11:26.918 Got JSON-RPC error response 00:11:26.918 response: 00:11:26.918 { 00:11:26.918 "code": -32602, 00:11:26.918 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:26.918 }' 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:26.918 { 00:11:26.918 "name": "foobar", 00:11:26.918 "method": "nvmf_delete_target", 00:11:26.918 "req_id": 1 00:11:26.918 } 00:11:26.918 Got JSON-RPC error response 00:11:26.918 response: 00:11:26.918 { 00:11:26.918 "code": -32602, 00:11:26.918 "message": "The specified target doesn't exist, cannot delete it." 
00:11:26.918 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.918 rmmod nvme_tcp 00:11:26.918 rmmod nvme_fabrics 00:11:26.918 rmmod nvme_keyring 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 74221 ']' 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 74221 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 74221 ']' 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 74221 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:26.918 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.919 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74221 00:11:26.919 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.919 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.919 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74221' 00:11:26.919 killing process with pid 74221 00:11:26.919 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 74221 00:11:26.919 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 74221 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # 
iptables-restore 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.177 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:11:27.435 ************************************ 00:11:27.435 END TEST nvmf_invalid 00:11:27.435 ************************************ 00:11:27.435 00:11:27.435 real 0m5.863s 00:11:27.435 user 0m22.704s 00:11:27.435 sys 0m1.270s 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.435 ************************************ 00:11:27.435 START TEST nvmf_connect_stress 00:11:27.435 
************************************ 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:27.435 * Looking for test storage... 00:11:27.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:27.435 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:27.694 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:27.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.696 --rc genhtml_branch_coverage=1 00:11:27.696 --rc genhtml_function_coverage=1 00:11:27.696 --rc genhtml_legend=1 00:11:27.696 --rc geninfo_all_blocks=1 00:11:27.696 --rc geninfo_unexecuted_blocks=1 00:11:27.696 00:11:27.696 ' 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:27.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.696 --rc genhtml_branch_coverage=1 00:11:27.696 --rc genhtml_function_coverage=1 00:11:27.696 --rc genhtml_legend=1 00:11:27.696 --rc geninfo_all_blocks=1 00:11:27.696 --rc geninfo_unexecuted_blocks=1 00:11:27.696 00:11:27.696 ' 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:27.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.696 --rc genhtml_branch_coverage=1 00:11:27.696 --rc genhtml_function_coverage=1 00:11:27.696 --rc genhtml_legend=1 00:11:27.696 --rc geninfo_all_blocks=1 00:11:27.696 --rc geninfo_unexecuted_blocks=1 00:11:27.696 00:11:27.696 ' 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:27.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:27.696 --rc genhtml_branch_coverage=1 00:11:27.696 --rc genhtml_function_coverage=1 00:11:27.696 --rc genhtml_legend=1 00:11:27.696 --rc geninfo_all_blocks=1 00:11:27.696 --rc geninfo_unexecuted_blocks=1 00:11:27.696 00:11:27.696 ' 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
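The trace just above is scripts/common.sh deciding whether the installed lcov predates version 2: lcov --version | awk '{print $NF}' yields the version string, and lt/cmp_versions split both versions on ".", "-", and ":" into arrays, then compare components numerically from left to right, which is why 1.15 sorts before 2 and the branch/function coverage options get exported above. A minimal sketch of that component-wise comparison, assuming purely numeric components; the function body here is illustrative, not the script's exact code:

    # Minimal sketch: does dotted version $1 sort strictly before $2?
    # Components compare numerically, left to right; missing components
    # default to 0, so `lt 1.15 2` is true because 1 < 2.
    lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0  # first difference: older
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1  # first difference: newer
        done
        return 1  # equal versions are not "less than"
    }

    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: keep legacy --rc options"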
00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.696 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:27.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:27.697 17:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.697 17:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:27.697 Cannot find device "nvmf_init_br" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:27.697 Cannot find device "nvmf_init_br2" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:27.697 Cannot find device "nvmf_tgt_br" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.697 Cannot find device "nvmf_tgt_br2" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:27.697 Cannot find device "nvmf_init_br" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:27.697 Cannot find device "nvmf_init_br2" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:27.697 Cannot find device "nvmf_tgt_br" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:27.697 Cannot find device "nvmf_tgt_br2" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:27.697 Cannot find device "nvmf_br" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:27.697 Cannot find device "nvmf_init_if" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:27.697 Cannot find device "nvmf_init_if2" 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.697 17:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.697 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.697 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:27.698 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.698 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.958 17:11:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:27.958 17:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:27.958 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:27.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:27.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:11:27.959 00:11:27.959 --- 10.0.0.3 ping statistics --- 00:11:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.959 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:27.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:27.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:11:27.959 00:11:27.959 --- 10.0.0.4 ping statistics --- 00:11:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.959 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:27.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:11:27.959 00:11:27.959 --- 10.0.0.1 ping statistics --- 00:11:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.959 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:27.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms
00:11:27.959
00:11:27.959 --- 10.0.0.2 ping statistics ---
00:11:27.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:27.959 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=74768
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 74768
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 74768 ']'
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:27.959 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:27.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:28.218 [2024-12-06 17:11:10.265541] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
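At this point nvmfappstart has forked nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (pid 74768), and waitforlisten polls the /var/tmp/spdk.sock JSON-RPC socket before the test issues any rpc_cmd; the DPDK EAL parameter dump that continues below is the target's own startup output. A hedged sketch of that start-and-wait pattern, reusing the socket path and retry budget from the trace; the loop body is illustrative, not autotest_common.sh's exact waitforlisten:

    # Start the target in the test namespace, then wait until its JSON-RPC
    # socket exists before sending RPCs; bail out if the process dies first.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                          # pid of the netns-exec wrapper; its
                                        # liveness tracks the target here
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$nvmfpid" || exit 1    # target crashed during startup
        [[ -S $rpc_addr ]] && break     # socket is up; safe to run rpc.py
        sleep 0.1
    done
    [[ -S $rpc_addr ]] || exit 1        # never came up within the budget

The same kill -0 liveness probe reappears further down, where the test loops on kill -0 74811 to confirm the connect_stress process (PERF_PID) is still running while it hammers nqn.2016-06.io.spdk:cnode1.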
00:11:28.218 [2024-12-06 17:11:10.265629] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.218 [2024-12-06 17:11:10.415961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.218 [2024-12-06 17:11:10.455623] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.218 [2024-12-06 17:11:10.455688] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.218 [2024-12-06 17:11:10.455703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.218 [2024-12-06 17:11:10.455713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.218 [2024-12-06 17:11:10.455722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.218 [2024-12-06 17:11:10.456668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.218 [2024-12-06 17:11:10.456790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.219 [2024-12-06 17:11:10.456791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.478 [2024-12-06 17:11:10.632995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:28.478 17:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.478 [2024-12-06 17:11:10.657167] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.478 NULL1 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74811 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.478 17:11:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.046 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:29.046 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:29.046 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.046 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.046 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.304 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.304 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:29.304 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.304 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.304 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.563 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.563 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:29.563 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.563 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.563 17:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.822 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.822 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:29.822 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.822 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.822 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.081 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.081 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:30.081 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.081 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.081 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.649 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.649 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:30.649 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.649 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.649 17:11:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.907 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.907 
17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:30.907 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.907 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.907 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.166 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.166 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:31.166 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.166 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.166 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.425 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.425 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:31.425 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.425 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.425 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.683 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.683 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:31.683 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.683 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.683 17:11:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.250 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.250 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:32.250 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.250 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.250 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.509 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.509 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:32.509 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.509 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.509 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.768 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.768 17:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:32.768 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.768 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.768 17:11:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.026 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.026 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:33.026 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.026 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.026 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.591 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.591 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:33.591 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.591 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.591 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.848 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.848 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:33.848 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.848 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.848 17:11:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.105 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.105 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:34.105 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.105 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.105 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.362 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.362 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:34.362 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.362 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.362 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.620 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.620 17:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:34.620 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.620 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.620 17:11:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.185 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.185 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:35.185 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.185 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.185 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.442 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.442 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:35.442 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.442 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.442 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.700 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.700 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:35.700 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.700 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.700 17:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.957 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.957 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:35.957 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.957 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.957 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.215 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:36.215 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.215 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.215 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.781 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.781 17:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:36.781 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.781 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.781 17:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.039 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.039 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:37.039 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.039 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.039 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.297 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.297 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:37.297 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.297 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.297 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.555 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.555 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:37.555 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.555 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.555 17:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.121 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.121 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:38.121 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.121 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.121 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.379 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.379 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811 00:11:38.379 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.379 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.379 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.650 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.650 17:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811
00:11:38.650 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:11:38.650 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:38.650 17:11:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:38.650 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74811
00:11:38.940 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74811) - No such process
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74811
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:38.940 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 74768 ']'
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 74768
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 74768 ']'
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 74768
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74768
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:38.940 killing process with pid 74768
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74768'
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 74768
00:11:38.940 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 74768
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:11:39.211 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0
00:11:39.469
00:11:39.469 real 0m11.989s
00:11:39.469 user 0m39.132s
00:11:39.469 sys 0m3.357s
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:39.469 ************************************
00:11:39.469 END TEST nvmf_connect_stress
00:11:39.469 ************************************
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:11:39.469 ************************************
00:11:39.469 START TEST nvmf_fused_ordering
00:11:39.469 ************************************
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:11:39.469 * Looking for test storage...
00:11:39.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:11:39.469 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:39.728 17:11:21
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.728 --rc genhtml_branch_coverage=1 00:11:39.728 --rc genhtml_function_coverage=1 00:11:39.728 --rc genhtml_legend=1 00:11:39.728 --rc geninfo_all_blocks=1 00:11:39.728 --rc geninfo_unexecuted_blocks=1 00:11:39.728 00:11:39.728 ' 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.728 --rc genhtml_branch_coverage=1 00:11:39.728 --rc genhtml_function_coverage=1 00:11:39.728 --rc genhtml_legend=1 00:11:39.728 --rc geninfo_all_blocks=1 00:11:39.728 --rc geninfo_unexecuted_blocks=1 00:11:39.728 00:11:39.728 ' 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:39.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.728 --rc genhtml_branch_coverage=1 00:11:39.728 --rc genhtml_function_coverage=1 00:11:39.728 --rc genhtml_legend=1 00:11:39.728 --rc geninfo_all_blocks=1 00:11:39.728 --rc geninfo_unexecuted_blocks=1 00:11:39.728 00:11:39.728 ' 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:39.728 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:39.728 --rc genhtml_branch_coverage=1 00:11:39.728 --rc genhtml_function_coverage=1 00:11:39.728 --rc genhtml_legend=1 00:11:39.728 --rc geninfo_all_blocks=1 00:11:39.728 --rc geninfo_unexecuted_blocks=1 00:11:39.728 00:11:39.728 ' 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.728 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:11:39.729 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:39.729 17:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:39.729 Cannot find device "nvmf_init_br" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:39.729 Cannot find device "nvmf_init_br2" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:39.729 Cannot find device "nvmf_tgt_br" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:39.729 Cannot find device "nvmf_tgt_br2" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:39.729 Cannot find device "nvmf_init_br" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:39.729 Cannot find device "nvmf_init_br2" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:39.729 Cannot find device "nvmf_tgt_br" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:39.729 Cannot find device "nvmf_tgt_br2" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:39.729 Cannot find device "nvmf_br" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:39.729 Cannot find device "nvmf_init_if" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:11:39.729 
17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:39.729 Cannot find device "nvmf_init_if2" 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:39.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:39.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:11:39.729 17:11:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:39.729 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:39.729 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:39.729 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:39.729 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:39.729 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:39.988 17:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:11:39.988 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:11:39.988 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms
00:11:39.988
00:11:39.988 --- 10.0.0.3 ping statistics ---
00:11:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:39.988 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:11:39.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:11:39.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms
00:11:39.988
00:11:39.988 --- 10.0.0.4 ping statistics ---
00:11:39.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:39.988 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
00:11:39.988 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:11:39.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:39.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:11:39.988
00:11:39.989 --- 10.0.0.1 ping statistics ---
00:11:39.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:39.989 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:11:39.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:39.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms
00:11:39.989
00:11:39.989 --- 10.0.0.2 ping statistics ---
00:11:39.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:39.989 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=75182
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 75182
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 75182 ']'
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:39.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
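The nvmftestinit/nvmf_veth_init sequence traced above wires the whole test topology out of plain iproute2 and iptables calls before the target is even started. As a minimal sketch, assuming the same interface names and 10.0.0.x addressing shown in the trace but trimmed to a single initiator/target pair (the standalone-script form and the omission of the second pair are this sketch's simplifications, not the harness's actual code):

# Create the target's private network namespace and two veth pairs
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
# Move the target-side endpoint into the namespace and address both ends
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# Bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port and let traffic cross the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Verify reachability in both directions, as the pings above do
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Running the target in its own namespace is what lets the kernel NVMe/TCP initiator and the SPDK target share one VM while still talking across a real network stack; every later command prefixed with "ip netns exec nvmf_tgt_ns_spdk" in this log executes on the target side of that bridge.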
00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.989 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.247 [2024-12-06 17:11:22.307834] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:11:40.247 [2024-12-06 17:11:22.308482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.247 [2024-12-06 17:11:22.450367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.247 [2024-12-06 17:11:22.482010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.247 [2024-12-06 17:11:22.482065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.247 [2024-12-06 17:11:22.482077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.247 [2024-12-06 17:11:22.482085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.247 [2024-12-06 17:11:22.482092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.247 [2024-12-06 17:11:22.482420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 [2024-12-06 17:11:22.606903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 [2024-12-06 17:11:22.627031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 NULL1 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.507 17:11:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.507 [2024-12-06 17:11:22.680871] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
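The rpc_cmd calls traced above (fused_ordering.sh lines 15-22) are what stand up the target that the fused_ordering binary then connects to. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py, so the same target can be reproduced by hand roughly as follows; the rpc.py spelling is this sketch's assumption, while the subcommands and their arguments are taken verbatim from the trace:

# TCP transport; -o and -u 8192 tune it (C2H-success toggle and I/O unit size)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Subsystem allowing any host (-a), serial SPDK00000000000001, at most 10 namespaces (-m)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
# Listen on the namespaced target address verified by the pings earlier
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# 1000 MiB null bdev with 512-byte blocks; matches the "Namespace ID: 1 size: 1GB" line below
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_wait_for_examine
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

A null bdev discards writes and returns zeroes on reads, which suits a test that exercises fused-command ordering rather than data integrity.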
00:11:40.507 [2024-12-06 17:11:22.680921] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75213 ]
00:11:41.072 Attached to nqn.2016-06.io.spdk:cnode1
00:11:41.072 Namespace ID: 1 size: 1GB
00:11:41.072 fused_ordering(0) through fused_ordering(1023): 1024 iterations logged between 00:11:41.072 and 00:11:42.723 [repetitive per-iteration output condensed]
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:42.723 rmmod nvme_tcp
00:11:42.723 rmmod nvme_fabrics
00:11:42.723 rmmod nvme_keyring
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
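For reference, the fused_ordering run traced above reduces to the short sequence below. This is a reconstructed sketch, not the verbatim body of target/fused_ordering.sh: rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, so direct rpc.py calls are shown here as an assumed hand-run equivalent, and it further assumes an SPDK nvmf target is already running with a tcp transport created.

#!/usr/bin/env bash
# Sketch of the test flow logged above (arguments taken verbatim from the trace).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Listener, backing bdev, and namespace, as issued via rpc_cmd in the trace.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks (the "size: 1GB" namespace above)
$rpc bdev_wait_for_examine
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The exerciser connects to that listener and drives fused command pairs,
# printing one fused_ordering(N) line per iteration (0..1023 above).
/home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'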
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 75182 ']'
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 75182
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 75182 ']'
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 75182
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75182
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:42.723 killing process with pid 75182
17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75182'
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 75182
00:11:42.723 17:11:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 75182
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- #
ip link set nvmf_tgt_br2 down 00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:42.980 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:11:43.237 00:11:43.237 real 0m3.697s 00:11:43.237 user 0m4.181s 00:11:43.237 sys 0m1.379s 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:43.237 ************************************ 00:11:43.237 END TEST nvmf_fused_ordering 00:11:43.237 ************************************ 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.237 17:11:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.238 ************************************ 00:11:43.238 START TEST nvmf_ns_masking 00:11:43.238 ************************************ 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:43.238 * Looking for test storage... 
00:11:43.238 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.238 --rc genhtml_branch_coverage=1 00:11:43.238 --rc genhtml_function_coverage=1 00:11:43.238 --rc genhtml_legend=1 00:11:43.238 --rc geninfo_all_blocks=1 00:11:43.238 --rc geninfo_unexecuted_blocks=1 00:11:43.238 00:11:43.238 ' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.238 --rc genhtml_branch_coverage=1 00:11:43.238 --rc genhtml_function_coverage=1 00:11:43.238 --rc genhtml_legend=1 00:11:43.238 --rc geninfo_all_blocks=1 00:11:43.238 --rc geninfo_unexecuted_blocks=1 00:11:43.238 00:11:43.238 ' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.238 --rc genhtml_branch_coverage=1 00:11:43.238 --rc genhtml_function_coverage=1 00:11:43.238 --rc genhtml_legend=1 00:11:43.238 --rc geninfo_all_blocks=1 00:11:43.238 --rc geninfo_unexecuted_blocks=1 00:11:43.238 00:11:43.238 ' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:43.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.238 --rc genhtml_branch_coverage=1 00:11:43.238 --rc genhtml_function_coverage=1 00:11:43.238 --rc genhtml_legend=1 00:11:43.238 --rc geninfo_all_blocks=1 00:11:43.238 --rc geninfo_unexecuted_blocks=1 00:11:43.238 00:11:43.238 ' 00:11:43.238 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.495 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=cb001b83-ba43-4915-b148-d2151a148a86 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a1e1488b-907e-4278-807d-f13a8d235e9b 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=30f1e6fb-1718-44de-9222-5605d2c89c1b 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:43.495 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:43.496 17:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:43.496 Cannot find device "nvmf_init_br" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:43.496 Cannot find device "nvmf_init_br2" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:43.496 Cannot find device "nvmf_tgt_br" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.496 Cannot find device "nvmf_tgt_br2" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:43.496 Cannot find device "nvmf_init_br" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:43.496 Cannot find device "nvmf_init_br2" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:43.496 Cannot find device "nvmf_tgt_br" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:43.496 Cannot find device 
"nvmf_tgt_br2" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:43.496 Cannot find device "nvmf_br" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:43.496 Cannot find device "nvmf_init_if" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:43.496 Cannot find device "nvmf_init_if2" 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.496 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.496 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.753 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:43.754 
17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:43.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:43.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:11:43.754 00:11:43.754 --- 10.0.0.3 ping statistics --- 00:11:43.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.754 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:43.754 17:11:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:43.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
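The failed deletions above ("Cannot find device", "Cannot open network namespace") are expected: nvmf_veth_init tears down any leftovers before building, and each cleanup command is forced to succeed (the `-- # true` entries). Setup then rebuilds the topology: the SPDK target lives in its own network namespace, each endpoint gets a veth pair whose bridge-side peer is enslaved to nvmf_br, the ipts wrapper (nvmf/common.sh@790) re-issues every iptables rule with an SPDK_NVMF comment tag so teardown can strip exactly these rules later, and four pings prove both directions before any NVMe traffic. Condensed from the commands above, with names and addresses as in the log (the various `ip link set ... up` steps elided):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3                                   # default ns (initiator) -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator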
00:11:43.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:11:43.754 00:11:43.754 --- 10.0.0.4 ping statistics --- 00:11:43.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.754 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:43.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:11:43.754 00:11:43.754 --- 10.0.0.1 ping statistics --- 00:11:43.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.754 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:43.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:11:43.754 00:11:43.754 --- 10.0.0.2 ping statistics --- 00:11:43.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.754 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75464 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75464 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75464 ']' 00:11:43.754 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.011 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.011 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:11:44.011 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.011 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.011 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.011 [2024-12-06 17:11:26.104030] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:11:44.011 [2024-12-06 17:11:26.104119] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.011 [2024-12-06 17:11:26.247817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.011 [2024-12-06 17:11:26.294716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.011 [2024-12-06 17:11:26.294785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.011 [2024-12-06 17:11:26.294807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.011 [2024-12-06 17:11:26.294821] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.011 [2024-12-06 17:11:26.294833] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:44.011 [2024-12-06 17:11:26.295217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.268 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:44.525 [2024-12-06 17:11:26.667907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.525 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:44.525 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:44.525 17:11:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:44.782 Malloc1 00:11:44.782 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:45.041 Malloc2 00:11:45.041 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.609 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:45.868 17:11:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:45.868 [2024-12-06 17:11:28.156981] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 30f1e6fb-1718-44de-9222-5605d2c89c1b -a 10.0.0.3 -s 4420 -i 4 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:46.126 17:11:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:48.027 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.285 [ 0]:0x1 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 
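With nvmf_tgt running inside the namespace (note the `ip netns exec nvmf_tgt_ns_spdk` prefix on the app command, and the waitforlisten poll on /var/tmp/spdk.sock above), the bring-up is a short rpc.py sequence followed by a kernel-initiator connect. All commands below are as echoed in the trace:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I 30f1e6fb-1718-44de-9222-5605d2c89c1b -a 10.0.0.3 -s 4420 -i 4

waitforserial then polls lsblk for the SPDKISFASTANDAWESOME serial, and nvme list-subsys piped through jq recovers the controller name (nvme0) used by every visibility check that follows.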
00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa3498734eb0450dbe6e2c6fd21b2327 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa3498734eb0450dbe6e2c6fd21b2327 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.285 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.543 [ 0]:0x1 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa3498734eb0450dbe6e2c6fd21b2327 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa3498734eb0450dbe6e2c6fd21b2327 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.543 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:48.801 [ 1]:0x2 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.801 17:11:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.059 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:49.317 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:49.317 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 30f1e6fb-1718-44de-9222-5605d2c89c1b -a 10.0.0.3 -s 4420 -i 4 00:11:49.574 17:11:31 
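ns_is_visible (target/ns_masking.sh@43-45) is the assertion the whole test revolves around: a namespace counts as visible only if it shows up in nvme list-ns and reports a non-zero NGUID. A simplified reconstruction of what the trace shows the helper doing:

  ns_is_visible() {   # $1 = nsid in hex, e.g. 0x1
      nvme list-ns /dev/nvme0 | grep "$1" || return 1        # prints e.g. "[ 0]:0x1"
      local nguid
      nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
      [[ $nguid != "00000000000000000000000000000000" ]]     # a masked ns reads back all zeros
  }

Because Malloc1 was just re-added with --no-auto-visible, the reconnect that follows should no longer enumerate namespace 1, which is why the next visibility check is wrapped in NOT.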
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:49.574 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:49.574 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.574 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:49.574 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:49.574 17:11:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:51.474 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:51.474 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:51.474 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.475 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.733 [ 0]:0x2 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.733 17:11:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.992 [ 0]:0x1 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa3498734eb0450dbe6e2c6fd21b2327 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa3498734eb0450dbe6e2c6fd21b2327 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.992 [ 1]:0x2 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.992 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:52.559 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:52.560 [ 0]:0x2 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
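NOT (from autotest_common.sh) runs a command that is expected to fail and inverts its status; that is what the es bookkeeping in the trace ((( es > 128 )), (( !es == 0 ))) implements. Reduced to its core idea, with the argument validation and signal-death handling of the real helper omitted:

  NOT() {
      if "$@"; then
          return 1    # unexpected success
      fi
      return 0        # the expected failure
  }
  NOT ns_is_visible 0x1   # passes only while nsid 1 is masked for this host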
target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.560 17:11:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:52.818 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:52.818 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 30f1e6fb-1718-44de-9222-5605d2c89c1b -a 10.0.0.3 -s 4420 -i 4 00:11:53.075 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:53.075 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.075 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.075 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:53.075 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:53.075 17:11:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.973 [ 0]:0x1 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:11:54.973 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fa3498734eb0450dbe6e2c6fd21b2327 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fa3498734eb0450dbe6e2c6fd21b2327 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:55.231 [ 1]:0x2 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.231 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 
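Worth noting: visibility is being toggled on a live connection here. After the connect at @101, nvmf_ns_remove_host at @106 re-masks namespace 1, and the very next list-ns on the same /dev/nvme0 controller no longer shows it; the trace never disconnects in between. Both RPCs take the subsystem NQN, the nsid, and the host NQN:

  ./scripts/rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  ./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1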
00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:55.489 [ 0]:0x2 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.489 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.490 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.747 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.747 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.747 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.747 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:55.747 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:55.747 17:11:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:56.006 [2024-12-06 17:11:38.079389] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:56.006 2024/12/06 17:11:38 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:11:56.006 request: 00:11:56.006 { 00:11:56.006 "method": "nvmf_ns_remove_host", 00:11:56.006 "params": { 00:11:56.006 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:56.006 "nsid": 2, 00:11:56.006 "host": "nqn.2016-06.io.spdk:host1" 00:11:56.006 } 00:11:56.006 } 00:11:56.006 Got JSON-RPC error response 00:11:56.006 GoRPCClient: error on JSON-RPC call 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:56.006 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # 
grep 0x2 00:11:56.007 [ 0]:0x2 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=59eb5e6eb8ca4426ae9d8c58159165cb 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 59eb5e6eb8ca4426ae9d8c58159165cb != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75836 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75836 /var/tmp/host.sock 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75836 ']' 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.007 17:11:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:56.266 [2024-12-06 17:11:38.350954] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
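From this point a second SPDK application plays the host role: spdk_tgt starts on core mask 0x2 (the target already owns core 0) with its own RPC socket, and the hostrpc helper (target/ns_masking.sh@48) is simply rpc.py pointed at that socket. Namespaces now surface as SPDK bdevs rather than kernel block devices:

  ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
  hostpid=$!
  # hostrpc == rpc.py -s /var/tmp/host.sock
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0   # bdev appears as nvme0n1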
00:11:56.266 [2024-12-06 17:11:38.351331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75836 ] 00:11:56.266 [2024-12-06 17:11:38.502083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.266 [2024-12-06 17:11:38.541970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.229 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.229 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:57.229 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.488 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.747 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid cb001b83-ba43-4915-b148-d2151a148a86 00:11:57.747 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:57.747 17:11:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CB001B83BA434915B148D2151A148A86 -i 00:11:58.007 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a1e1488b-907e-4278-807d-f13a8d235e9b 00:11:58.007 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:58.007 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A1E1488B907E4278807DF13A8D235E9B -i 00:11:58.266 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:58.525 17:11:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:59.092 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.092 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:59.350 nvme0n1 00:11:59.350 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.350 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:59.610 nvme1n2 00:11:59.610 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:59.610 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:59.610 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:59.610 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:59.610 17:11:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ cb001b83-ba43-4915-b148-d2151a148a86 == \c\b\0\0\1\b\8\3\-\b\a\4\3\-\4\9\1\5\-\b\1\4\8\-\d\2\1\5\1\a\1\4\8\a\8\6 ]] 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:00.178 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:00.745 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a1e1488b-907e-4278-807d-f13a8d235e9b == \a\1\e\1\4\8\8\b\-\9\0\7\e\-\4\2\7\8\-\8\0\7\d\-\f\1\3\a\8\d\2\3\5\e\9\b ]] 00:12:00.745 17:11:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.004 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid cb001b83-ba43-4915-b148-d2151a148a86 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CB001B83BA434915B148D2151A148A86 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CB001B83BA434915B148D2151A148A86 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:01.263 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g CB001B83BA434915B148D2151A148A86 00:12:01.522 [2024-12-06 17:11:43.657805] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:01.522 [2024-12-06 17:11:43.657860] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:01.522 [2024-12-06 17:11:43.657873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.522 2024/12/06 17:11:43 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid hide_metadata:%!s(bool=false) nguid:CB001B83BA434915B148D2151A148A86 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:12:01.522 request: 00:12:01.522 { 00:12:01.522 "method": "nvmf_subsystem_add_ns", 00:12:01.522 "params": { 00:12:01.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.522 "namespace": { 00:12:01.522 "bdev_name": "invalid", 00:12:01.522 "nsid": 1, 00:12:01.522 "nguid": "CB001B83BA434915B148D2151A148A86", 00:12:01.522 "no_auto_visible": false, 00:12:01.522 "hide_metadata": false 00:12:01.522 } 00:12:01.522 } 00:12:01.522 } 00:12:01.522 Got JSON-RPC error response 00:12:01.522 GoRPCClient: error on JSON-RPC call 00:12:01.522 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.522 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.522 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.522 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.522 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid cb001b83-ba43-4915-b148-d2151a148a86 00:12:01.522 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.523 17:11:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g CB001B83BA434915B148D2151A148A86 -i 
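Two properties were just exercised: a namespace added with an explicit NGUID (-g, the dash-stripped UUID) hands the original UUID back to the host through bdev_get_bdevs, and pointing nvmf_subsystem_add_ns at a bdev that does not exist fails cleanly (bdev_open_ext error=-19, i.e. ENODEV, surfaced as a JSON-RPC Invalid parameters error rather than a crash). The round-trip check, assembled from the commands in the trace:

  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 \
      -n 1 -g CB001B83BA434915B148D2151A148A86 -i
  uuid=$(./scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid')
  [[ $uuid == cb001b83-ba43-4915-b148-d2151a148a86 ]]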
00:12:01.782 17:11:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 75836 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75836 ']' 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75836 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75836 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75836' 00:12:04.308 killing process with pid 75836 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75836 00:12:04.308 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75836 00:12:04.565 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.822 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:04.822 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:04.822 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.822 17:11:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:04.822 rmmod nvme_tcp 00:12:04.822 rmmod nvme_fabrics 00:12:04.822 rmmod nvme_keyring 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@517 -- # '[' -n 75464 ']' 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75464 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75464 ']' 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75464 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.822 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75464 00:12:05.079 killing process with pid 75464 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75464' 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75464 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75464 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.079 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:05.080 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 
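
Note: the killprocess helper traced twice above (pid 75836 for the host app, pid 75464 for the target) is deliberately defensive: it refuses an empty pid, probes liveness with kill -0, resolves the process name so anything launched under sudo gets signalled through sudo, and finally waits on the pid so sockets and listening ports are really released before teardown continues. A simplified reconstruction (a sketch, not the verbatim autotest_common.sh helper):

    killprocess() {
        local pid=$1 name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1                   # already gone?
        if [[ $(uname) == Linux ]]; then
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        fi
        echo "killing process with pid $pid"
        if [[ $name == sudo ]]; then
            sudo kill "$pid"                         # signal through sudo
        else
            kill "$pid"
        fi
        wait "$pid" || true                          # reap so ports free up
    }
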
00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:12:05.337 00:12:05.337 real 0m22.173s 00:12:05.337 user 0m38.624s 00:12:05.337 sys 0m3.032s 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.337 ************************************ 00:12:05.337 END TEST nvmf_ns_masking 00:12:05.337 ************************************ 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.337 ************************************ 00:12:05.337 START TEST nvmf_auth_target 00:12:05.337 ************************************ 00:12:05.337 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:12:05.595 * Looking for test storage... 
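
Note: that closes out nvmf_ns_masking (22 seconds wall time) and hands off to nvmf_auth_target. Just before the handoff, nvmftestfini unloaded the nvme-tcp, nvme-fabrics and nvme-keyring modules, restored iptables, and dismantled the veth topology. The teardown order in the trace is deliberate: detach every port from the bridge, bring the ports down, delete the bridge, delete the host-side veth ends (deleting one end of a veth pair removes its peer), delete the target-namespace ends, then drop the namespace itself inside _remove_spdk_ns, which runs with xtrace disabled and so leaves only its eval in the log. Sketched:

    # nvmf_veth_fini teardown order, reconstructed from the trace (a sketch)
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster     # detach from the bridge first
    done
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if          # peer nvmf_init_br goes with it
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk     # the _remove_spdk_ns step
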
00:12:05.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:05.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.595 --rc genhtml_branch_coverage=1 00:12:05.595 --rc genhtml_function_coverage=1 00:12:05.595 --rc genhtml_legend=1 00:12:05.595 --rc geninfo_all_blocks=1 00:12:05.595 --rc geninfo_unexecuted_blocks=1 00:12:05.595 00:12:05.595 ' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:05.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.595 --rc genhtml_branch_coverage=1 00:12:05.595 --rc genhtml_function_coverage=1 00:12:05.595 --rc genhtml_legend=1 00:12:05.595 --rc geninfo_all_blocks=1 00:12:05.595 --rc geninfo_unexecuted_blocks=1 00:12:05.595 00:12:05.595 ' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:05.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.595 --rc genhtml_branch_coverage=1 00:12:05.595 --rc genhtml_function_coverage=1 00:12:05.595 --rc genhtml_legend=1 00:12:05.595 --rc geninfo_all_blocks=1 00:12:05.595 --rc geninfo_unexecuted_blocks=1 00:12:05.595 00:12:05.595 ' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:05.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.595 --rc genhtml_branch_coverage=1 00:12:05.595 --rc genhtml_function_coverage=1 00:12:05.595 --rc genhtml_legend=1 00:12:05.595 --rc geninfo_all_blocks=1 00:12:05.595 --rc geninfo_unexecuted_blocks=1 00:12:05.595 00:12:05.595 ' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.595 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.596 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:05.596 
17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:05.596 Cannot find device "nvmf_init_br" 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:05.596 Cannot find device "nvmf_init_br2" 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:05.596 Cannot find device "nvmf_tgt_br" 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:05.596 Cannot find device "nvmf_tgt_br2" 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:05.596 Cannot find device "nvmf_init_br" 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:05.596 Cannot find device "nvmf_init_br2" 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:12:05.596 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:05.860 Cannot find device "nvmf_tgt_br" 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:05.861 Cannot find device "nvmf_tgt_br2" 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:05.861 Cannot find device "nvmf_br" 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:05.861 Cannot find device "nvmf_init_if" 00:12:05.861 17:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:05.861 Cannot find device "nvmf_init_if2" 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:05.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:05.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:05.861 17:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:05.861 17:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:05.861 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:06.118 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:06.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:12:06.118 00:12:06.118 --- 10.0.0.3 ping statistics --- 00:12:06.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.118 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:06.118 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:06.118 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:06.118 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:12:06.118 00:12:06.118 --- 10.0.0.4 ping statistics --- 00:12:06.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.118 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:06.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:12:06.119 00:12:06.119 --- 10.0.0.1 ping statistics --- 00:12:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.119 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:06.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:06.119 00:12:06.119 --- 10.0.0.2 ping statistics --- 00:12:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.119 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=76337 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 76337 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76337 ']' 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
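
Note: with connectivity across the bridge verified by the four pings, nvmfappstart launches nvmf_tgt inside the target namespace (pid 76337, started with -L nvmf_auth for auth-level debug logging) and blocks in waitforlisten until the app answers on its RPC socket. A plausible reconstruction of that loop (a sketch; the real helper lives in autotest_common.sh, and max_retries=100 comes straight from the trace):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" || return 1    # app died before it could listen
            # any cheap RPC proves the socket is up
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    waitforlisten 76337                       # target, /var/tmp/spdk.sock
    waitforlisten 76381 /var/tmp/host.sock    # host app, traced just below
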
00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.119 17:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=76381 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=259547d47d2b3f77b1587ae5d65c03740100a65944cc69a5 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kTO 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 259547d47d2b3f77b1587ae5d65c03740100a65944cc69a5 0 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 259547d47d2b3f77b1587ae5d65c03740100a65944cc69a5 0 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=259547d47d2b3f77b1587ae5d65c03740100a65944cc69a5 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:07.052 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.312 17:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kTO 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kTO 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kTO 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d7c1580a7f7611079c4a393b90c364be85b4bf4a9df5a8d18c68c95d0c15798 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.eVb 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d7c1580a7f7611079c4a393b90c364be85b4bf4a9df5a8d18c68c95d0c15798 3 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8d7c1580a7f7611079c4a393b90c364be85b4bf4a9df5a8d18c68c95d0c15798 3 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d7c1580a7f7611079c4a393b90c364be85b4bf4a9df5a8d18c68c95d0c15798 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.eVb 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.eVb 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.eVb 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:07.312 17:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0ab35818760a5174c613b399efa71633 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.HFJ 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0ab35818760a5174c613b399efa71633 1 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0ab35818760a5174c613b399efa71633 1 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0ab35818760a5174c613b399efa71633 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.HFJ 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.HFJ 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.HFJ 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=709fc02a1b44609b8ffcc3be60a803c32597286fc7b5bc9a 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.B3h 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 709fc02a1b44609b8ffcc3be60a803c32597286fc7b5bc9a 2 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 709fc02a1b44609b8ffcc3be60a803c32597286fc7b5bc9a 2 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=709fc02a1b44609b8ffcc3be60a803c32597286fc7b5bc9a 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.B3h 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.B3h 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.B3h 00:12:07.312 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3e9257bc5e2ad0fdfc1807b07de38b4530eb630a5c5a512f 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bcd 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3e9257bc5e2ad0fdfc1807b07de38b4530eb630a5c5a512f 2 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3e9257bc5e2ad0fdfc1807b07de38b4530eb630a5c5a512f 2 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3e9257bc5e2ad0fdfc1807b07de38b4530eb630a5c5a512f 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:07.313 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bcd 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bcd 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Bcd 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.571 17:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bb9433536d409fa86f23db12039b5bd8 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.BHX 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bb9433536d409fa86f23db12039b5bd8 1 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bb9433536d409fa86f23db12039b5bd8 1 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.571 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bb9433536d409fa86f23db12039b5bd8 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.BHX 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.BHX 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.BHX 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f3f1220382bb96ebdb79bb9b86671b298440e8cd4b116b458042370b3b7dcd55 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8Yv 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
f3f1220382bb96ebdb79bb9b86671b298440e8cd4b116b458042370b3b7dcd55 3 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f3f1220382bb96ebdb79bb9b86671b298440e8cd4b116b458042370b3b7dcd55 3 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f3f1220382bb96ebdb79bb9b86671b298440e8cd4b116b458042370b3b7dcd55 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8Yv 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8Yv 00:12:07.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8Yv 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 76337 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76337 ']' 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.572 17:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:07.830 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.830 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:07.830 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 76381 /var/tmp/host.sock 00:12:07.830 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 76381 ']' 00:12:07.830 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:07.831 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.831 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
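The gen_dhchap_key calls traced above draw len/2 random bytes with xxd, write them to a mktemp file, and wrap them in SPDK's DHHC-1 secret format before the file is chmod'ed to 0600. A minimal standalone sketch of that helper, assuming the secret layout is base64(key bytes || little-endian CRC32) behind a two-digit digest index — which is what the inline "python -" step appears to emit (python3 substituted here; digest indices taken from the digests map in the trace):

    # Sketch of gen_dhchap_key/format_dhchap_key as traced above. Assumed layout:
    # "DHHC-1:<digest index>:<base64(key || crc32_le)>:" with null=0, sha256=1,
    # sha384=2, sha512=3, matching the digests map logged by nvmf/common.sh@752.
    gen_dhchap_key() {
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local digest=$1 len=$2 file key
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of entropy
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, byteorder="little"); print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"   # the caller stores this path in keys[]/ckeys[], as @760 does above
    }

    gen_dhchap_key sha384 48   # e.g. a 48-hex-char key tagged sha384, as at auth.sh@96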
00:12:07.831 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.831 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.397 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kTO 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kTO 00:12:08.398 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kTO 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.eVb ]] 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eVb 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eVb 00:12:08.656 17:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eVb 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HFJ 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.HFJ 00:12:08.914 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.HFJ 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.B3h ]] 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B3h 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B3h 00:12:09.177 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B3h 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Bcd 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Bcd 00:12:09.493 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Bcd 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.BHX ]] 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHX 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHX 00:12:09.752 17:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHX 00:12:10.010 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:10.010 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yv 00:12:10.010 17:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.010 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.010 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.010 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yv 00:12:10.010 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yv 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:10.578 17:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:11.146 00:12:11.146 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.146 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.146 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.404 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.404 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.404 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.404 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.404 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.404 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.404 { 00:12:11.404 "auth": { 00:12:11.404 "dhgroup": "null", 00:12:11.404 "digest": "sha256", 00:12:11.404 "state": "completed" 00:12:11.404 }, 00:12:11.404 "cntlid": 1, 00:12:11.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:11.404 "listen_address": { 00:12:11.404 "adrfam": "IPv4", 00:12:11.404 "traddr": "10.0.0.3", 00:12:11.404 "trsvcid": "4420", 00:12:11.404 "trtype": "TCP" 00:12:11.404 }, 00:12:11.404 "peer_address": { 00:12:11.404 "adrfam": "IPv4", 00:12:11.404 "traddr": "10.0.0.1", 00:12:11.404 "trsvcid": "49186", 00:12:11.404 "trtype": "TCP" 00:12:11.404 }, 00:12:11.404 "qid": 0, 00:12:11.404 "state": "enabled", 00:12:11.405 "thread": "nvmf_tgt_poll_group_000" 00:12:11.405 } 00:12:11.405 ]' 00:12:11.405 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.405 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:11.405 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.405 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:11.405 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.663 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.664 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.664 17:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.922 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:11.922 17:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.184 17:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.184 17:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.184 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.184 { 00:12:17.184 "auth": { 00:12:17.184 "dhgroup": "null", 00:12:17.184 "digest": "sha256", 00:12:17.184 "state": "completed" 00:12:17.184 }, 00:12:17.184 "cntlid": 3, 00:12:17.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:17.184 "listen_address": { 00:12:17.184 "adrfam": "IPv4", 00:12:17.184 "traddr": "10.0.0.3", 00:12:17.184 "trsvcid": "4420", 00:12:17.184 "trtype": "TCP" 00:12:17.184 }, 00:12:17.184 "peer_address": { 00:12:17.184 "adrfam": "IPv4", 00:12:17.184 "traddr": "10.0.0.1", 00:12:17.184 "trsvcid": "39556", 00:12:17.184 "trtype": "TCP" 00:12:17.184 }, 00:12:17.184 "qid": 0, 00:12:17.184 "state": "enabled", 00:12:17.184 "thread": "nvmf_tgt_poll_group_000" 00:12:17.184 } 00:12:17.184 ]' 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:17.184 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.442 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:17.442 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.442 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.442 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.442 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.700 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret 
DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:17.700 17:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:18.265 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:18.522 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:18.779 17:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.036 00:12:19.295 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.295 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.295 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.554 { 00:12:19.554 "auth": { 00:12:19.554 "dhgroup": "null", 00:12:19.554 "digest": "sha256", 00:12:19.554 "state": "completed" 00:12:19.554 }, 00:12:19.554 "cntlid": 5, 00:12:19.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:19.554 "listen_address": { 00:12:19.554 "adrfam": "IPv4", 00:12:19.554 "traddr": "10.0.0.3", 00:12:19.554 "trsvcid": "4420", 00:12:19.554 "trtype": "TCP" 00:12:19.554 }, 00:12:19.554 "peer_address": { 00:12:19.554 "adrfam": "IPv4", 00:12:19.554 "traddr": "10.0.0.1", 00:12:19.554 "trsvcid": "39572", 00:12:19.554 "trtype": "TCP" 00:12:19.554 }, 00:12:19.554 "qid": 0, 00:12:19.554 "state": "enabled", 00:12:19.554 "thread": "nvmf_tgt_poll_group_000" 00:12:19.554 } 00:12:19.554 ]' 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:19.554 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.811 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.811 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.811 17:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:20.068 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:20.068 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:20.737 17:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:20.994 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:21.577 00:12:21.577 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.577 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.577 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.834 { 00:12:21.834 "auth": { 00:12:21.834 "dhgroup": "null", 00:12:21.834 "digest": "sha256", 00:12:21.834 "state": "completed" 00:12:21.834 }, 00:12:21.834 "cntlid": 7, 00:12:21.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:21.834 "listen_address": { 00:12:21.834 "adrfam": "IPv4", 00:12:21.834 "traddr": "10.0.0.3", 00:12:21.834 "trsvcid": "4420", 00:12:21.834 "trtype": "TCP" 00:12:21.834 }, 00:12:21.834 "peer_address": { 00:12:21.834 "adrfam": "IPv4", 00:12:21.834 "traddr": "10.0.0.1", 00:12:21.834 "trsvcid": "39604", 00:12:21.834 "trtype": "TCP" 00:12:21.834 }, 00:12:21.834 "qid": 0, 00:12:21.834 "state": "enabled", 00:12:21.834 "thread": "nvmf_tgt_poll_group_000" 00:12:21.834 } 00:12:21.834 ]' 00:12:21.834 17:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.834 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:22.091 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:22.091 17:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:23.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:23.022 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.279 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.842 00:12:23.842 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.842 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.842 17:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.100 { 00:12:24.100 "auth": { 00:12:24.100 "dhgroup": "ffdhe2048", 00:12:24.100 "digest": "sha256", 00:12:24.100 "state": "completed" 00:12:24.100 }, 00:12:24.100 "cntlid": 9, 00:12:24.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:24.100 "listen_address": { 00:12:24.100 "adrfam": "IPv4", 00:12:24.100 "traddr": "10.0.0.3", 00:12:24.100 "trsvcid": "4420", 00:12:24.100 "trtype": "TCP" 00:12:24.100 }, 00:12:24.100 "peer_address": { 00:12:24.100 "adrfam": "IPv4", 00:12:24.100 "traddr": "10.0.0.1", 00:12:24.100 "trsvcid": "55202", 00:12:24.100 "trtype": "TCP" 00:12:24.100 }, 00:12:24.100 "qid": 0, 00:12:24.100 "state": "enabled", 00:12:24.100 "thread": "nvmf_tgt_poll_group_000" 00:12:24.100 } 00:12:24.100 ]' 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.100 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.665 
17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:24.665 17:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:25.231 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.798 17:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.056 00:12:26.056 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:26.056 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:26.056 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:26.315 { 00:12:26.315 "auth": { 00:12:26.315 "dhgroup": "ffdhe2048", 00:12:26.315 "digest": "sha256", 00:12:26.315 "state": "completed" 00:12:26.315 }, 00:12:26.315 "cntlid": 11, 00:12:26.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:26.315 "listen_address": { 00:12:26.315 "adrfam": "IPv4", 00:12:26.315 "traddr": "10.0.0.3", 00:12:26.315 "trsvcid": "4420", 00:12:26.315 "trtype": "TCP" 00:12:26.315 }, 00:12:26.315 "peer_address": { 00:12:26.315 "adrfam": "IPv4", 00:12:26.315 "traddr": "10.0.0.1", 00:12:26.315 "trsvcid": "55228", 00:12:26.315 "trtype": "TCP" 00:12:26.315 }, 00:12:26.315 "qid": 0, 00:12:26.315 "state": "enabled", 00:12:26.315 "thread": "nvmf_tgt_poll_group_000" 00:12:26.315 } 00:12:26.315 ]' 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:26.315 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:26.573 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:26.573 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.573 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.573 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.573 
17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.831 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:26.831 17:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:27.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:27.767 17:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.767 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.333 00:12:28.333 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.333 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:28.333 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:28.591 { 00:12:28.591 "auth": { 00:12:28.591 "dhgroup": "ffdhe2048", 00:12:28.591 "digest": "sha256", 00:12:28.591 "state": "completed" 00:12:28.591 }, 00:12:28.591 "cntlid": 13, 00:12:28.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:28.591 "listen_address": { 00:12:28.591 "adrfam": "IPv4", 00:12:28.591 "traddr": "10.0.0.3", 00:12:28.591 "trsvcid": "4420", 00:12:28.591 "trtype": "TCP" 00:12:28.591 }, 00:12:28.591 "peer_address": { 00:12:28.591 "adrfam": "IPv4", 00:12:28.591 "traddr": "10.0.0.1", 00:12:28.591 "trsvcid": "55266", 00:12:28.591 "trtype": "TCP" 00:12:28.591 }, 00:12:28.591 "qid": 0, 00:12:28.591 "state": "enabled", 00:12:28.591 "thread": "nvmf_tgt_poll_group_000" 00:12:28.591 } 00:12:28.591 ]' 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:28.591 17:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:28.591 17:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:28.850 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:28.850 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:29.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:29.782 17:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
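The trace above and below repeats one DH-HMAC-CHAP authentication cycle per (digest, dhgroup, keyid) combination: pin the host initiator's allowed parameters, authorize the host on the target subsystem with the keys under test, attach a controller so the handshake runs during CONNECT, confirm the qpair reports auth state "completed", then tear down and redo the same handshake with nvme-cli. A minimal standalone sketch of one such iteration follows, assuming the same rpc.py path, sockets, NQNs, and addresses shown in this log; key2/ckey2 stand for key names registered earlier in the test, and the DHHC-1 secret values are elided, not real keys.

  #!/usr/bin/env bash
  # Sketch of one connect_authenticate iteration from target/auth.sh (not verbatim).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock   # host-side SPDK app; "hostrpc" in the trace
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56

  # 1. Restrict the host initiator to one digest and one DH group.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # 2. Authorize the host on the target with the keys under test
  #    (--dhchap-ctrlr-key is dropped for key3, whose ckey slot is empty in this run).
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 3. Attach a controller; DH-HMAC-CHAP runs as part of the fabric CONNECT.
  $rpc -s $hostsock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # 4. Verify on both sides: the controller exists and the qpair finished authenticating.
  $rpc -s $hostsock bdev_nvme_get_controllers | jq -r '.[].name'      # expect: nvme0
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'  # expect: completed

  # 5. Tear down, then repeat the handshake with nvme-cli and DHHC-1 secret strings.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 \
      --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'  # values elided
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Note that target-side RPCs (rpc_cmd in the trace) go to the default target socket, while host-side RPCs (hostrpc) pass -s /var/tmp/host.sock; the sketch preserves that split.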
00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.053 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.358 00:12:30.358 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.358 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.358 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.614 { 00:12:30.614 "auth": { 00:12:30.614 "dhgroup": "ffdhe2048", 00:12:30.614 "digest": "sha256", 00:12:30.614 "state": "completed" 00:12:30.614 }, 00:12:30.614 "cntlid": 15, 00:12:30.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:30.614 "listen_address": { 00:12:30.614 "adrfam": "IPv4", 00:12:30.614 "traddr": "10.0.0.3", 00:12:30.614 "trsvcid": "4420", 00:12:30.614 "trtype": "TCP" 00:12:30.614 }, 00:12:30.614 "peer_address": { 00:12:30.614 "adrfam": "IPv4", 00:12:30.614 "traddr": "10.0.0.1", 00:12:30.614 "trsvcid": "55284", 00:12:30.614 "trtype": "TCP" 00:12:30.614 }, 00:12:30.614 "qid": 0, 00:12:30.614 "state": "enabled", 00:12:30.614 "thread": "nvmf_tgt_poll_group_000" 00:12:30.614 } 00:12:30.614 ]' 00:12:30.614 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:30.871 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.871 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.871 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:30.871 17:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.871 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.871 
17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.871 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.128 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:31.128 17:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:32.064 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:32.065 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.322 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.580 00:12:32.580 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.580 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.580 17:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.143 { 00:12:33.143 "auth": { 00:12:33.143 "dhgroup": "ffdhe3072", 00:12:33.143 "digest": "sha256", 00:12:33.143 "state": "completed" 00:12:33.143 }, 00:12:33.143 "cntlid": 17, 00:12:33.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:33.143 "listen_address": { 00:12:33.143 "adrfam": "IPv4", 00:12:33.143 "traddr": "10.0.0.3", 00:12:33.143 "trsvcid": "4420", 00:12:33.143 "trtype": "TCP" 00:12:33.143 }, 00:12:33.143 "peer_address": { 00:12:33.143 "adrfam": "IPv4", 00:12:33.143 "traddr": "10.0.0.1", 00:12:33.143 "trsvcid": "55302", 00:12:33.143 "trtype": "TCP" 00:12:33.143 }, 00:12:33.143 "qid": 0, 00:12:33.143 "state": "enabled", 00:12:33.143 "thread": "nvmf_tgt_poll_group_000" 00:12:33.143 } 00:12:33.143 ]' 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.143 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:33.144 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.144 17:12:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.144 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.144 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.400 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:33.400 17:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.334 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:34.335 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.592 17:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.850 00:12:34.850 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:34.850 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.850 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.108 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.108 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.108 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.108 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.367 { 00:12:35.367 "auth": { 00:12:35.367 "dhgroup": "ffdhe3072", 00:12:35.367 "digest": "sha256", 00:12:35.367 "state": "completed" 00:12:35.367 }, 00:12:35.367 "cntlid": 19, 00:12:35.367 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:35.367 "listen_address": { 00:12:35.367 "adrfam": "IPv4", 00:12:35.367 "traddr": "10.0.0.3", 00:12:35.367 "trsvcid": "4420", 00:12:35.367 "trtype": "TCP" 00:12:35.367 }, 00:12:35.367 "peer_address": { 00:12:35.367 "adrfam": "IPv4", 00:12:35.367 "traddr": "10.0.0.1", 00:12:35.367 "trsvcid": "49156", 00:12:35.367 "trtype": "TCP" 00:12:35.367 }, 00:12:35.367 "qid": 0, 00:12:35.367 "state": "enabled", 00:12:35.367 "thread": "nvmf_tgt_poll_group_000" 00:12:35.367 } 00:12:35.367 ]' 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.367 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.626 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:35.626 17:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.561 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.819 17:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.079 00:12:37.079 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.079 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.079 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.337 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.337 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.337 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.337 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.595 { 00:12:37.595 "auth": { 00:12:37.595 "dhgroup": "ffdhe3072", 00:12:37.595 "digest": "sha256", 00:12:37.595 "state": "completed" 00:12:37.595 }, 00:12:37.595 "cntlid": 21, 00:12:37.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:37.595 "listen_address": { 00:12:37.595 "adrfam": "IPv4", 00:12:37.595 "traddr": "10.0.0.3", 00:12:37.595 "trsvcid": "4420", 00:12:37.595 "trtype": "TCP" 00:12:37.595 }, 00:12:37.595 "peer_address": { 00:12:37.595 "adrfam": "IPv4", 00:12:37.595 "traddr": "10.0.0.1", 00:12:37.595 "trsvcid": "49190", 00:12:37.595 "trtype": "TCP" 00:12:37.595 }, 00:12:37.595 "qid": 0, 00:12:37.595 "state": "enabled", 00:12:37.595 "thread": "nvmf_tgt_poll_group_000" 00:12:37.595 } 00:12:37.595 ]' 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.595 17:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.595 17:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.161 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:38.161 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:38.724 17:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.289 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.546 00:12:39.546 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.546 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.546 17:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.803 { 00:12:39.803 "auth": { 00:12:39.803 "dhgroup": "ffdhe3072", 00:12:39.803 "digest": "sha256", 00:12:39.803 "state": "completed" 00:12:39.803 }, 00:12:39.803 "cntlid": 23, 00:12:39.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:39.803 "listen_address": { 00:12:39.803 "adrfam": "IPv4", 00:12:39.803 "traddr": "10.0.0.3", 00:12:39.803 "trsvcid": "4420", 00:12:39.803 "trtype": "TCP" 00:12:39.803 }, 00:12:39.803 "peer_address": { 00:12:39.803 "adrfam": "IPv4", 00:12:39.803 "traddr": "10.0.0.1", 00:12:39.803 "trsvcid": "49230", 00:12:39.803 "trtype": "TCP" 00:12:39.803 }, 00:12:39.803 "qid": 0, 00:12:39.803 "state": "enabled", 00:12:39.803 "thread": "nvmf_tgt_poll_group_000" 00:12:39.803 } 00:12:39.803 ]' 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:12:39.803 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.060 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:40.060 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.060 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.060 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.060 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.317 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:40.317 17:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:41.257 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.515 17:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:41.772 00:12:41.772 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.772 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.773 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.339 { 00:12:42.339 "auth": { 00:12:42.339 "dhgroup": "ffdhe4096", 00:12:42.339 "digest": "sha256", 00:12:42.339 "state": "completed" 00:12:42.339 }, 00:12:42.339 "cntlid": 25, 00:12:42.339 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:42.339 "listen_address": { 00:12:42.339 "adrfam": "IPv4", 00:12:42.339 "traddr": "10.0.0.3", 00:12:42.339 "trsvcid": "4420", 00:12:42.339 "trtype": "TCP" 00:12:42.339 }, 00:12:42.339 "peer_address": { 00:12:42.339 "adrfam": "IPv4", 00:12:42.339 "traddr": "10.0.0.1", 00:12:42.339 "trsvcid": "49266", 00:12:42.339 "trtype": "TCP" 00:12:42.339 }, 00:12:42.339 "qid": 0, 00:12:42.339 "state": "enabled", 00:12:42.339 "thread": "nvmf_tgt_poll_group_000" 00:12:42.339 } 00:12:42.339 ]' 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.339 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.340 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:42.340 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.340 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.340 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.340 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.599 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:42.599 17:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:43.532 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.791 17:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:44.049 00:12:44.307 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:44.307 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:44.307 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:44.566 { 00:12:44.566 "auth": { 00:12:44.566 "dhgroup": "ffdhe4096", 00:12:44.566 "digest": "sha256", 00:12:44.566 "state": "completed" 00:12:44.566 }, 00:12:44.566 "cntlid": 27, 00:12:44.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:44.566 "listen_address": { 00:12:44.566 "adrfam": "IPv4", 00:12:44.566 "traddr": "10.0.0.3", 00:12:44.566 "trsvcid": "4420", 00:12:44.566 "trtype": "TCP" 00:12:44.566 }, 00:12:44.566 "peer_address": { 00:12:44.566 "adrfam": "IPv4", 00:12:44.566 "traddr": "10.0.0.1", 00:12:44.566 "trsvcid": "41640", 00:12:44.566 "trtype": "TCP" 00:12:44.566 }, 00:12:44.566 "qid": 0, 
00:12:44.566 "state": "enabled", 00:12:44.566 "thread": "nvmf_tgt_poll_group_000" 00:12:44.566 } 00:12:44.566 ]' 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.566 17:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.131 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:45.131 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:45.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:45.695 17:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.952 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.953 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.953 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.953 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.953 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.518 00:12:46.518 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.518 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.518 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.776 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.776 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.776 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.776 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.776 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.776 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.776 { 00:12:46.776 "auth": { 00:12:46.776 "dhgroup": "ffdhe4096", 00:12:46.776 "digest": "sha256", 00:12:46.777 "state": "completed" 00:12:46.777 }, 00:12:46.777 "cntlid": 29, 00:12:46.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:46.777 "listen_address": { 00:12:46.777 "adrfam": "IPv4", 00:12:46.777 "traddr": "10.0.0.3", 00:12:46.777 "trsvcid": "4420", 00:12:46.777 "trtype": "TCP" 00:12:46.777 }, 00:12:46.777 "peer_address": { 00:12:46.777 "adrfam": "IPv4", 00:12:46.777 "traddr": "10.0.0.1", 
00:12:46.777 "trsvcid": "41674", 00:12:46.777 "trtype": "TCP" 00:12:46.777 }, 00:12:46.777 "qid": 0, 00:12:46.777 "state": "enabled", 00:12:46.777 "thread": "nvmf_tgt_poll_group_000" 00:12:46.777 } 00:12:46.777 ]' 00:12:46.777 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.777 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:46.777 17:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.777 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:46.777 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.777 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.777 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.777 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:47.342 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:47.342 17:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:47.907 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.164 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:48.729 00:12:48.729 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:48.729 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.729 17:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.988 { 00:12:48.988 "auth": { 00:12:48.988 "dhgroup": "ffdhe4096", 00:12:48.988 "digest": "sha256", 00:12:48.988 "state": "completed" 00:12:48.988 }, 00:12:48.988 "cntlid": 31, 00:12:48.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:48.988 "listen_address": { 00:12:48.988 "adrfam": "IPv4", 00:12:48.988 "traddr": "10.0.0.3", 00:12:48.988 "trsvcid": "4420", 00:12:48.988 "trtype": "TCP" 00:12:48.988 }, 00:12:48.988 "peer_address": { 00:12:48.988 "adrfam": "IPv4", 00:12:48.988 "traddr": 
"10.0.0.1", 00:12:48.988 "trsvcid": "41698", 00:12:48.988 "trtype": "TCP" 00:12:48.988 }, 00:12:48.988 "qid": 0, 00:12:48.988 "state": "enabled", 00:12:48.988 "thread": "nvmf_tgt_poll_group_000" 00:12:48.988 } 00:12:48.988 ]' 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:48.988 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.247 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.247 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.247 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:49.505 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:49.505 17:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:50.071 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.637 17:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:50.896 00:12:50.896 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.896 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.896 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.462 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.462 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.462 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.462 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.462 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.462 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.462 { 00:12:51.462 "auth": { 00:12:51.463 "dhgroup": "ffdhe6144", 00:12:51.463 "digest": "sha256", 00:12:51.463 "state": "completed" 00:12:51.463 }, 00:12:51.463 "cntlid": 33, 00:12:51.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:51.463 "listen_address": { 00:12:51.463 "adrfam": "IPv4", 00:12:51.463 "traddr": "10.0.0.3", 00:12:51.463 "trsvcid": "4420", 00:12:51.463 
"trtype": "TCP" 00:12:51.463 }, 00:12:51.463 "peer_address": { 00:12:51.463 "adrfam": "IPv4", 00:12:51.463 "traddr": "10.0.0.1", 00:12:51.463 "trsvcid": "41728", 00:12:51.463 "trtype": "TCP" 00:12:51.463 }, 00:12:51.463 "qid": 0, 00:12:51.463 "state": "enabled", 00:12:51.463 "thread": "nvmf_tgt_poll_group_000" 00:12:51.463 } 00:12:51.463 ]' 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.463 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.721 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:51.721 17:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:52.677 17:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:52.933 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:53.497 00:12:53.497 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.497 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.497 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.754 { 00:12:53.754 "auth": { 00:12:53.754 "dhgroup": "ffdhe6144", 00:12:53.754 "digest": "sha256", 00:12:53.754 "state": "completed" 00:12:53.754 }, 00:12:53.754 "cntlid": 35, 00:12:53.754 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:53.754 "listen_address": { 00:12:53.754 "adrfam": "IPv4", 00:12:53.754 "traddr": "10.0.0.3", 00:12:53.754 "trsvcid": "4420", 00:12:53.754 "trtype": "TCP" 00:12:53.754 }, 00:12:53.754 "peer_address": { 00:12:53.754 "adrfam": "IPv4", 00:12:53.754 "traddr": "10.0.0.1", 00:12:53.754 "trsvcid": "41764", 00:12:53.754 "trtype": "TCP" 00:12:53.754 }, 00:12:53.754 "qid": 0, 00:12:53.754 "state": "enabled", 00:12:53.754 "thread": "nvmf_tgt_poll_group_000" 00:12:53.754 } 00:12:53.754 ]' 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:53.754 17:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.754 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.754 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.754 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.318 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:54.318 17:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:54.884 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.142 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:55.708 00:12:55.708 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.708 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.708 17:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.966 { 00:12:55.966 "auth": { 00:12:55.966 "dhgroup": "ffdhe6144", 
00:12:55.966 "digest": "sha256", 00:12:55.966 "state": "completed" 00:12:55.966 }, 00:12:55.966 "cntlid": 37, 00:12:55.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:55.966 "listen_address": { 00:12:55.966 "adrfam": "IPv4", 00:12:55.966 "traddr": "10.0.0.3", 00:12:55.966 "trsvcid": "4420", 00:12:55.966 "trtype": "TCP" 00:12:55.966 }, 00:12:55.966 "peer_address": { 00:12:55.966 "adrfam": "IPv4", 00:12:55.966 "traddr": "10.0.0.1", 00:12:55.966 "trsvcid": "49802", 00:12:55.966 "trtype": "TCP" 00:12:55.966 }, 00:12:55.966 "qid": 0, 00:12:55.966 "state": "enabled", 00:12:55.966 "thread": "nvmf_tgt_poll_group_000" 00:12:55.966 } 00:12:55.966 ]' 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:55.966 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.225 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:56.225 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.225 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.225 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.225 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:56.483 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:56.483 17:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:57.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:12:57.049 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.306 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.563 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.563 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:57.563 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:57.563 17:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:58.129 00:12:58.129 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.129 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.129 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.387 { 00:12:58.387 "auth": { 00:12:58.387 "dhgroup": 
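A detail visible in the key3 pass above: keys 0 through 2 are registered together with a controller key (ckeyN), making the authentication mutual, while key3 has no ckey entry, so the ${ckeys[$3]:+...} expansion in connect_authenticate drops the --dhchap-ctrlr-key argument entirely. The two nvmf_subsystem_add_host variants side by side, with the subsystem and host NQNs abbreviated to variables:

  # Bidirectional: the host verifies the controller as well.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Unidirectional: only the controller authenticates the host.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3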
"ffdhe6144", 00:12:58.387 "digest": "sha256", 00:12:58.387 "state": "completed" 00:12:58.387 }, 00:12:58.387 "cntlid": 39, 00:12:58.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:12:58.387 "listen_address": { 00:12:58.387 "adrfam": "IPv4", 00:12:58.387 "traddr": "10.0.0.3", 00:12:58.387 "trsvcid": "4420", 00:12:58.387 "trtype": "TCP" 00:12:58.387 }, 00:12:58.387 "peer_address": { 00:12:58.387 "adrfam": "IPv4", 00:12:58.387 "traddr": "10.0.0.1", 00:12:58.387 "trsvcid": "49826", 00:12:58.387 "trtype": "TCP" 00:12:58.387 }, 00:12:58.387 "qid": 0, 00:12:58.387 "state": "enabled", 00:12:58.387 "thread": "nvmf_tgt_poll_group_000" 00:12:58.387 } 00:12:58.387 ]' 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.387 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:58.954 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:58.954 17:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:59.521 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:59.780 17:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:00.347 00:13:00.347 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.347 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.347 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.913 17:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.913 { 00:13:00.913 "auth": { 00:13:00.913 "dhgroup": "ffdhe8192", 00:13:00.913 "digest": "sha256", 00:13:00.913 "state": "completed" 00:13:00.913 }, 00:13:00.913 "cntlid": 41, 00:13:00.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:00.913 "listen_address": { 00:13:00.913 "adrfam": "IPv4", 00:13:00.913 "traddr": "10.0.0.3", 00:13:00.913 "trsvcid": "4420", 00:13:00.913 "trtype": "TCP" 00:13:00.913 }, 00:13:00.913 "peer_address": { 00:13:00.913 "adrfam": "IPv4", 00:13:00.913 "traddr": "10.0.0.1", 00:13:00.913 "trsvcid": "49856", 00:13:00.913 "trtype": "TCP" 00:13:00.913 }, 00:13:00.913 "qid": 0, 00:13:00.913 "state": "enabled", 00:13:00.913 "thread": "nvmf_tgt_poll_group_000" 00:13:00.913 } 00:13:00.913 ]' 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:00.913 17:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.913 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:00.913 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.913 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.913 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.913 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.171 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:13:01.171 17:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.103 17:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:02.103 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.362 17:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:02.927 00:13:03.185 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.185 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.185 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.443 17:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.443 { 00:13:03.443 "auth": { 00:13:03.443 "dhgroup": "ffdhe8192", 00:13:03.443 "digest": "sha256", 00:13:03.443 "state": "completed" 00:13:03.443 }, 00:13:03.443 "cntlid": 43, 00:13:03.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:03.443 "listen_address": { 00:13:03.443 "adrfam": "IPv4", 00:13:03.443 "traddr": "10.0.0.3", 00:13:03.443 "trsvcid": "4420", 00:13:03.443 "trtype": "TCP" 00:13:03.443 }, 00:13:03.443 "peer_address": { 00:13:03.443 "adrfam": "IPv4", 00:13:03.443 "traddr": "10.0.0.1", 00:13:03.443 "trsvcid": "49880", 00:13:03.443 "trtype": "TCP" 00:13:03.443 }, 00:13:03.443 "qid": 0, 00:13:03.443 "state": "enabled", 00:13:03.443 "thread": "nvmf_tgt_poll_group_000" 00:13:03.443 } 00:13:03.443 ]' 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.443 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.702 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:13:03.702 17:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
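Each pass ends with the same cleanup so the next key starts from a clean slate, as the records above show: the SPDK host detaches its controller, the kernel initiator disconnects, and the host entry is removed from the subsystem. Condensed from the log:

  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"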
00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:04.637 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.895 17:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.895 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.895 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.895 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:04.895 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:05.462 00:13:05.719 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.719 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.719 17:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:05.977 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.977 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.977 17:12:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.977 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.977 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.977 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.977 { 00:13:05.977 "auth": { 00:13:05.977 "dhgroup": "ffdhe8192", 00:13:05.977 "digest": "sha256", 00:13:05.977 "state": "completed" 00:13:05.977 }, 00:13:05.977 "cntlid": 45, 00:13:05.977 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:05.977 "listen_address": { 00:13:05.977 "adrfam": "IPv4", 00:13:05.977 "traddr": "10.0.0.3", 00:13:05.977 "trsvcid": "4420", 00:13:05.977 "trtype": "TCP" 00:13:05.977 }, 00:13:05.977 "peer_address": { 00:13:05.977 "adrfam": "IPv4", 00:13:05.977 "traddr": "10.0.0.1", 00:13:05.977 "trsvcid": "55158", 00:13:05.977 "trtype": "TCP" 00:13:05.977 }, 00:13:05.977 "qid": 0, 00:13:05.977 "state": "enabled", 00:13:05.977 "thread": "nvmf_tgt_poll_group_000" 00:13:05.977 } 00:13:05.977 ]' 00:13:05.977 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.978 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.543 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:13:06.543 17:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
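Besides the SPDK initiator, each pass also exercises the kernel host through nvme-cli (the nvme_connect and nvme disconnect records above). A sketch of that leg with the DHHC-1 secret blobs elided; $hostid is the same f9d2cf8c-... UUID as the host NQN suffix, and the secrets printed throughout this log are throwaway test keys, not credentials worth reusing:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0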
00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:07.108 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:07.676 17:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.243 00:13:08.243 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.243 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.243 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.502 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.502 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.502 
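The verification block that follows each attach reads the negotiated parameters back via nvmf_subsystem_get_qpairs and compares them against what was configured; the escaped-pattern tests in the log (e.g. [[ sha256 == \s\h\a\2\5\6 ]]) are roughly equivalent to the following, assuming $qpairs holds the JSON array printed after each call:

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]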
17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.502 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.502 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.502 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.502 { 00:13:08.502 "auth": { 00:13:08.502 "dhgroup": "ffdhe8192", 00:13:08.502 "digest": "sha256", 00:13:08.502 "state": "completed" 00:13:08.502 }, 00:13:08.502 "cntlid": 47, 00:13:08.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:08.502 "listen_address": { 00:13:08.502 "adrfam": "IPv4", 00:13:08.502 "traddr": "10.0.0.3", 00:13:08.502 "trsvcid": "4420", 00:13:08.502 "trtype": "TCP" 00:13:08.502 }, 00:13:08.502 "peer_address": { 00:13:08.502 "adrfam": "IPv4", 00:13:08.502 "traddr": "10.0.0.1", 00:13:08.502 "trsvcid": "55182", 00:13:08.502 "trtype": "TCP" 00:13:08.502 }, 00:13:08.502 "qid": 0, 00:13:08.502 "state": "enabled", 00:13:08.502 "thread": "nvmf_tgt_poll_group_000" 00:13:08.502 } 00:13:08.502 ]' 00:13:08.502 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.761 17:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.020 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:13:09.020 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:13:10.011 17:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:10.011 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.270 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:10.529
00:13:10.529 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:10.529 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:10.529 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:10.788 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:10.788 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:10.789 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:10.789 17:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:10.789 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:10.789 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:10.789 {
00:13:10.789 "auth": {
00:13:10.789 "dhgroup": "null",
00:13:10.789 "digest": "sha384",
00:13:10.789 "state": "completed"
00:13:10.789 },
00:13:10.789 "cntlid": 49,
00:13:10.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:10.789 "listen_address": {
00:13:10.789 "adrfam": "IPv4",
00:13:10.789 "traddr": "10.0.0.3",
00:13:10.789 "trsvcid": "4420",
00:13:10.789 "trtype": "TCP"
00:13:10.789 },
00:13:10.789 "peer_address": {
00:13:10.789 "adrfam": "IPv4",
00:13:10.789 "traddr": "10.0.0.1",
00:13:10.789 "trsvcid": "55220",
00:13:10.789 "trtype": "TCP"
00:13:10.789 },
00:13:10.789 "qid": 0,
00:13:10.789 "state": "enabled",
00:13:10.789 "thread": "nvmf_tgt_poll_group_000"
00:13:10.789 }
00:13:10.789 ]'
00:13:10.789 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:10.789 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:11.047 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:11.048 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:11.048 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:11.048 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:11.048 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:11.048 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:11.307 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:11.307 17:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:12.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:12.244 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.503 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:12.761
00:13:12.761 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:12.761 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:12.761 17:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:13.018 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:13.018 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:13.018 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.018 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:13.018 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.018 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:13.018 {
00:13:13.018 "auth": {
00:13:13.018 "dhgroup": "null",
00:13:13.018 "digest": "sha384",
00:13:13.018 "state": "completed"
00:13:13.018 },
00:13:13.018 "cntlid": 51,
00:13:13.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:13.018 "listen_address": {
00:13:13.018 "adrfam": "IPv4",
00:13:13.018 "traddr": "10.0.0.3",
00:13:13.018 "trsvcid": "4420",
00:13:13.018 "trtype": "TCP"
00:13:13.018 },
00:13:13.018 "peer_address": {
00:13:13.018 "adrfam": "IPv4",
00:13:13.018 "traddr": "10.0.0.1",
00:13:13.018 "trsvcid": "55234",
00:13:13.018 "trtype": "TCP"
00:13:13.018 },
00:13:13.018 "qid": 0,
00:13:13.018 "state": "enabled",
00:13:13.018 "thread": "nvmf_tgt_poll_group_000"
00:13:13.018 }
00:13:13.018 ]'
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:13.276 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:13.533 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:13.533 17:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:14.466 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.724 17:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:14.983
00:13:14.983 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:14.983 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:14.983 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:15.241 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:15.241 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:15.241 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:15.241 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:15.241 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:15.241 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:15.241 {
00:13:15.241 "auth": {
00:13:15.241 "dhgroup": "null",
00:13:15.241 "digest": "sha384",
00:13:15.241 "state": "completed"
00:13:15.241 },
00:13:15.241 "cntlid": 53,
00:13:15.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:15.241 "listen_address": {
00:13:15.241 "adrfam": "IPv4",
00:13:15.241 "traddr": "10.0.0.3",
00:13:15.241 "trsvcid": "4420",
00:13:15.241 "trtype": "TCP"
00:13:15.241 },
00:13:15.241 "peer_address": {
00:13:15.241 "adrfam": "IPv4",
00:13:15.241 "traddr": "10.0.0.1",
00:13:15.241 "trsvcid": "46430",
00:13:15.241 "trtype": "TCP"
00:13:15.241 },
00:13:15.241 "qid": 0,
00:13:15.241 "state": "enabled",
00:13:15.241 "thread": "nvmf_tgt_poll_group_000"
00:13:15.241 }
00:13:15.241 ]'
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:15.500 17:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:15.758 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:13:15.758 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:16.713 17:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:17.277
00:13:17.277 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:17.277 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:17.277 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:17.597 {
00:13:17.597 "auth": {
00:13:17.597 "dhgroup": "null",
00:13:17.597 "digest": "sha384",
00:13:17.597 "state": "completed"
00:13:17.597 },
00:13:17.597 "cntlid": 55,
00:13:17.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:17.597 "listen_address": {
00:13:17.597 "adrfam": "IPv4",
00:13:17.597 "traddr": "10.0.0.3",
00:13:17.597 "trsvcid": "4420",
00:13:17.597 "trtype": "TCP"
00:13:17.597 },
00:13:17.597 "peer_address": {
00:13:17.597 "adrfam": "IPv4",
00:13:17.597 "traddr": "10.0.0.1",
00:13:17.597 "trsvcid": "46468",
00:13:17.597 "trtype": "TCP"
00:13:17.597 },
00:13:17.597 "qid": 0,
00:13:17.597 "state": "enabled",
00:13:17.597 "thread": "nvmf_tgt_poll_group_000"
00:13:17.597 }
00:13:17.597 ]'
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:17.597 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:17.854 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:17.854 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:17.854 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:17.854 17:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:18.111 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:13:18.111 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:18.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:18.676 17:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:18.933 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:19.497
00:13:19.497 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:19.497 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:19.497 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:19.754 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:19.754 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:19.754 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.754 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:19.754 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.754 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:19.754 {
00:13:19.754 "auth": {
00:13:19.754 "dhgroup": "ffdhe2048",
00:13:19.754 "digest": "sha384",
00:13:19.754 "state": "completed"
00:13:19.754 },
00:13:19.754 "cntlid": 57,
00:13:19.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:19.754 "listen_address": {
00:13:19.754 "adrfam": "IPv4",
00:13:19.754 "traddr": "10.0.0.3",
00:13:19.755 "trsvcid": "4420",
00:13:19.755 "trtype": "TCP"
00:13:19.755 },
00:13:19.755 "peer_address": {
00:13:19.755 "adrfam": "IPv4",
00:13:19.755 "traddr": "10.0.0.1",
00:13:19.755 "trsvcid": "46498",
00:13:19.755 "trtype": "TCP"
00:13:19.755 },
00:13:19.755 "qid": 0,
00:13:19.755 "state": "enabled",
00:13:19.755 "thread": "nvmf_tgt_poll_group_000"
00:13:19.755 }
00:13:19.755 ]'
00:13:19.755 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:19.755 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:19.755 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:19.755 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:19.755 17:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:19.755 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:19.755 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:19.755 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:20.320 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:20.320 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:20.886 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:20.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:20.886 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:20.886 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:20.886 17:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:20.886 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:20.886 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:20.886 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:20.886 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.144 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:21.710
00:13:21.710 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:21.710 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:21.710 17:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:21.969 {
00:13:21.969 "auth": {
00:13:21.969 "dhgroup": "ffdhe2048",
00:13:21.969 "digest": "sha384",
00:13:21.969 "state": "completed"
00:13:21.969 },
00:13:21.969 "cntlid": 59,
00:13:21.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:21.969 "listen_address": {
00:13:21.969 "adrfam": "IPv4",
00:13:21.969 "traddr": "10.0.0.3",
00:13:21.969 "trsvcid": "4420",
00:13:21.969 "trtype": "TCP"
00:13:21.969 },
00:13:21.969 "peer_address": {
00:13:21.969 "adrfam": "IPv4",
00:13:21.969 "traddr": "10.0.0.1",
00:13:21.969 "trsvcid": "46536",
00:13:21.969 "trtype": "TCP"
00:13:21.969 },
00:13:21.969 "qid": 0,
00:13:21.969 "state": "enabled",
00:13:21.969 "thread": "nvmf_tgt_poll_group_000"
00:13:21.969 }
00:13:21.969 ]'
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:21.969 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:22.252 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:22.252 17:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:23.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:23.194 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.195 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:23.779
00:13:23.779 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:23.779 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:23.779 17:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:24.038 {
00:13:24.038 "auth": {
00:13:24.038 "dhgroup": "ffdhe2048",
00:13:24.038 "digest": "sha384",
00:13:24.038 "state": "completed"
00:13:24.038 },
00:13:24.038 "cntlid": 61,
00:13:24.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:24.038 "listen_address": {
00:13:24.038 "adrfam": "IPv4",
00:13:24.038 "traddr": "10.0.0.3",
00:13:24.038 "trsvcid": "4420",
00:13:24.038 "trtype": "TCP"
00:13:24.038 },
00:13:24.038 "peer_address": {
00:13:24.038 "adrfam": "IPv4",
00:13:24.038 "traddr": "10.0.0.1",
00:13:24.038 "trsvcid": "42086",
00:13:24.038 "trtype": "TCP"
00:13:24.038 },
00:13:24.038 "qid": 0,
00:13:24.038 "state": "enabled",
00:13:24.038 "thread": "nvmf_tgt_poll_group_000"
00:13:24.038 }
00:13:24.038 ]'
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:24.038 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:24.297 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:24.297 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:24.297 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:24.554 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:13:24.554 17:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:25.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:25.490 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:25.748 17:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:26.004
00:13:26.004 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:26.004 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:26.004 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:26.261 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:26.261 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:26.261 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:26.261 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:26.519 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:26.519 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:26.519 {
00:13:26.519 "auth": {
00:13:26.519 "dhgroup": "ffdhe2048",
00:13:26.519 "digest": "sha384",
00:13:26.519 "state": "completed"
00:13:26.519 },
00:13:26.519 "cntlid": 63,
00:13:26.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:26.519 "listen_address": {
00:13:26.519 "adrfam": "IPv4",
00:13:26.519 "traddr": "10.0.0.3",
00:13:26.519 "trsvcid": "4420",
00:13:26.519 "trtype": "TCP"
00:13:26.519 },
00:13:26.519 "peer_address": {
00:13:26.519 "adrfam": "IPv4",
00:13:26.519 "traddr": "10.0.0.1",
00:13:26.519 "trsvcid": "42110",
00:13:26.519 "trtype": "TCP"
00:13:26.519 },
00:13:26.519 "qid": 0,
00:13:26.519 "state": "enabled",
00:13:26.519 "thread": "nvmf_tgt_poll_group_000"
00:13:26.520 }
00:13:26.520 ]'
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:26.520 17:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:26.777 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:13:26.777 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:27.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:27.711 17:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:27.970 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:28.228
00:13:28.228 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:28.228 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:28.228 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:28.794 {
00:13:28.794 "auth": {
00:13:28.794 "dhgroup": "ffdhe3072",
00:13:28.794 "digest": "sha384",
00:13:28.794 "state": "completed"
00:13:28.794 },
00:13:28.794 "cntlid": 65,
00:13:28.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:28.794 "listen_address": {
00:13:28.794 "adrfam": "IPv4",
00:13:28.794 "traddr": "10.0.0.3",
00:13:28.794 "trsvcid": "4420",
00:13:28.794 "trtype": "TCP"
00:13:28.794 },
00:13:28.794 "peer_address": {
00:13:28.794 "adrfam": "IPv4",
00:13:28.794 "traddr": "10.0.0.1",
00:13:28.794 "trsvcid": "42144",
00:13:28.794 "trtype": "TCP"
00:13:28.794 },
00:13:28.794 "qid": 0,
00:13:28.794 "state": "enabled",
00:13:28.794 "thread": "nvmf_tgt_poll_group_000"
00:13:28.794 }
00:13:28.794 ]'
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:28.794 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:28.795 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:28.795 17:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:29.053 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:13:29.053 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.989 17:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.989 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.990 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.990 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.990 17:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.990 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:30.557 00:13:30.557 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.557 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.557 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.818 { 00:13:30.818 "auth": { 00:13:30.818 "dhgroup": "ffdhe3072", 00:13:30.818 "digest": "sha384", 00:13:30.818 "state": "completed" 00:13:30.818 }, 00:13:30.818 "cntlid": 67, 00:13:30.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:30.818 "listen_address": { 00:13:30.818 "adrfam": "IPv4", 00:13:30.818 "traddr": "10.0.0.3", 00:13:30.818 "trsvcid": "4420", 00:13:30.818 "trtype": "TCP" 00:13:30.818 }, 00:13:30.818 "peer_address": { 00:13:30.818 "adrfam": "IPv4", 00:13:30.818 "traddr": "10.0.0.1", 00:13:30.818 "trsvcid": "42158", 00:13:30.818 "trtype": "TCP" 00:13:30.818 }, 00:13:30.818 "qid": 0, 00:13:30.818 "state": "enabled", 00:13:30.818 "thread": "nvmf_tgt_poll_group_000" 00:13:30.818 } 00:13:30.818 ]' 00:13:30.818 17:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.818 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.385 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:13:31.385 17:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:31.950 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.207 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.208 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.208 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.208 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.208 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.208 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:32.773 00:13:32.773 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.773 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.773 17:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.030 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.030 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.030 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.030 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.030 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.030 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.030 { 00:13:33.030 "auth": { 00:13:33.030 "dhgroup": "ffdhe3072", 00:13:33.030 "digest": "sha384", 00:13:33.030 "state": "completed" 00:13:33.030 }, 00:13:33.030 "cntlid": 69, 00:13:33.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:33.030 "listen_address": { 00:13:33.030 "adrfam": "IPv4", 00:13:33.030 "traddr": "10.0.0.3", 00:13:33.030 "trsvcid": "4420", 00:13:33.030 "trtype": "TCP" 00:13:33.030 }, 00:13:33.030 "peer_address": { 00:13:33.030 "adrfam": "IPv4", 00:13:33.030 "traddr": "10.0.0.1", 00:13:33.030 "trsvcid": "42180", 00:13:33.030 "trtype": "TCP" 00:13:33.031 }, 00:13:33.031 "qid": 0, 00:13:33.031 "state": "enabled", 00:13:33.031 "thread": "nvmf_tgt_poll_group_000" 00:13:33.031 } 00:13:33.031 ]' 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:33.031 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.594 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:13:33.594 17:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:34.161 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.419 17:13:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.999 00:13:34.999 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.999 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.999 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.257 { 00:13:35.257 "auth": { 00:13:35.257 "dhgroup": "ffdhe3072", 00:13:35.257 "digest": "sha384", 00:13:35.257 "state": "completed" 00:13:35.257 }, 00:13:35.257 "cntlid": 71, 00:13:35.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:35.257 "listen_address": { 00:13:35.257 "adrfam": "IPv4", 00:13:35.257 "traddr": "10.0.0.3", 00:13:35.257 "trsvcid": "4420", 00:13:35.257 "trtype": "TCP" 00:13:35.257 }, 00:13:35.257 "peer_address": { 00:13:35.257 "adrfam": "IPv4", 00:13:35.257 "traddr": "10.0.0.1", 00:13:35.257 "trsvcid": "60324", 00:13:35.257 "trtype": "TCP" 00:13:35.257 }, 00:13:35.257 "qid": 0, 00:13:35.257 "state": "enabled", 00:13:35.257 "thread": "nvmf_tgt_poll_group_000" 00:13:35.257 } 00:13:35.257 ]' 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:35.257 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.515 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.515 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.515 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.783 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:13:35.783 17:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:36.372 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.630 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.630 17:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.888 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.889 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.889 17:13:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.148 00:13:37.148 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.148 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.148 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.407 { 00:13:37.407 "auth": { 00:13:37.407 "dhgroup": "ffdhe4096", 00:13:37.407 "digest": "sha384", 00:13:37.407 "state": "completed" 00:13:37.407 }, 00:13:37.407 "cntlid": 73, 00:13:37.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:37.407 "listen_address": { 00:13:37.407 "adrfam": "IPv4", 00:13:37.407 "traddr": "10.0.0.3", 00:13:37.407 "trsvcid": "4420", 00:13:37.407 "trtype": "TCP" 00:13:37.407 }, 00:13:37.407 "peer_address": { 00:13:37.407 "adrfam": "IPv4", 00:13:37.407 "traddr": "10.0.0.1", 00:13:37.407 "trsvcid": "60358", 00:13:37.407 "trtype": "TCP" 00:13:37.407 }, 00:13:37.407 "qid": 0, 00:13:37.407 "state": "enabled", 00:13:37.407 "thread": "nvmf_tgt_poll_group_000" 00:13:37.407 } 00:13:37.407 ]' 00:13:37.407 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.665 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:37.665 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.665 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:37.666 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.666 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.666 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.666 17:13:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.924 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:13:37.924 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:38.861 17:13:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.861 17:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.861 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.428 00:13:39.428 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.428 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.428 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.687 { 00:13:39.687 "auth": { 00:13:39.687 "dhgroup": "ffdhe4096", 00:13:39.687 "digest": "sha384", 00:13:39.687 "state": "completed" 00:13:39.687 }, 00:13:39.687 "cntlid": 75, 00:13:39.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:39.687 "listen_address": { 00:13:39.687 "adrfam": "IPv4", 00:13:39.687 "traddr": "10.0.0.3", 00:13:39.687 "trsvcid": "4420", 00:13:39.687 "trtype": "TCP" 00:13:39.687 }, 00:13:39.687 "peer_address": { 00:13:39.687 "adrfam": "IPv4", 00:13:39.687 "traddr": "10.0.0.1", 00:13:39.687 "trsvcid": "60388", 00:13:39.687 "trtype": "TCP" 00:13:39.687 }, 00:13:39.687 "qid": 0, 00:13:39.687 "state": "enabled", 00:13:39.687 "thread": "nvmf_tgt_poll_group_000" 00:13:39.687 } 00:13:39.687 ]' 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.687 17:13:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.254 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:13:40.254 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:40.821 17:13:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.080 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.648 00:13:41.648 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.648 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.648 17:13:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.907 { 00:13:41.907 "auth": { 00:13:41.907 "dhgroup": "ffdhe4096", 00:13:41.907 "digest": "sha384", 00:13:41.907 "state": "completed" 00:13:41.907 }, 00:13:41.907 "cntlid": 77, 00:13:41.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:41.907 "listen_address": { 00:13:41.907 "adrfam": "IPv4", 00:13:41.907 "traddr": "10.0.0.3", 00:13:41.907 "trsvcid": "4420", 00:13:41.907 "trtype": "TCP" 00:13:41.907 }, 00:13:41.907 "peer_address": { 00:13:41.907 "adrfam": "IPv4", 00:13:41.907 "traddr": "10.0.0.1", 00:13:41.907 "trsvcid": "60420", 00:13:41.907 "trtype": "TCP" 00:13:41.907 }, 00:13:41.907 "qid": 0, 00:13:41.907 "state": "enabled", 00:13:41.907 "thread": "nvmf_tgt_poll_group_000" 00:13:41.907 } 00:13:41.907 ]' 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.907 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:13:42.166 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:42.166 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.166 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.166 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.166 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.425 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:13:42.425 17:13:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:43.400 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.658 17:13:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.658 17:13:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.916 00:13:43.916 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.916 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.916 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.174 { 00:13:44.174 "auth": { 00:13:44.174 "dhgroup": "ffdhe4096", 00:13:44.174 "digest": "sha384", 00:13:44.174 "state": "completed" 00:13:44.174 }, 00:13:44.174 "cntlid": 79, 00:13:44.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:13:44.174 "listen_address": { 00:13:44.174 "adrfam": "IPv4", 00:13:44.174 "traddr": "10.0.0.3", 00:13:44.174 "trsvcid": "4420", 00:13:44.174 "trtype": "TCP" 00:13:44.174 }, 00:13:44.174 "peer_address": { 00:13:44.174 "adrfam": "IPv4", 00:13:44.174 "traddr": "10.0.0.1", 00:13:44.174 "trsvcid": "37040", 00:13:44.174 "trtype": "TCP" 00:13:44.174 }, 00:13:44.174 "qid": 0, 00:13:44.174 "state": "enabled", 00:13:44.174 "thread": "nvmf_tgt_poll_group_000" 00:13:44.174 } 00:13:44.174 ]' 00:13:44.174 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.432 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:44.432 17:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.432 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:44.432 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.432 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.432 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.432 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.691 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:13:44.691 17:13:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:45.628 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:45.887 17:13:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:46.453
00:13:46.453 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:46.453 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:46.453 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:46.711 {
00:13:46.711 "auth": {
00:13:46.711 "dhgroup": "ffdhe6144",
00:13:46.711 "digest": "sha384",
00:13:46.711 "state": "completed"
00:13:46.711 },
00:13:46.711 "cntlid": 81,
00:13:46.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:46.711 "listen_address": {
00:13:46.711 "adrfam": "IPv4",
00:13:46.711 "traddr": "10.0.0.3",
00:13:46.711 "trsvcid": "4420",
00:13:46.711 "trtype": "TCP"
00:13:46.711 },
00:13:46.711 "peer_address": {
00:13:46.711 "adrfam": "IPv4",
00:13:46.711 "traddr": "10.0.0.1",
00:13:46.711 "trsvcid": "37080",
00:13:46.711 "trtype": "TCP"
00:13:46.711 },
00:13:46.711 "qid": 0,
00:13:46.711 "state": "enabled",
00:13:46.711 "thread": "nvmf_tgt_poll_group_000"
00:13:46.711 }
00:13:46.711 ]'
17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:46.711 17:13:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:47.276 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:47.276 17:13:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:47.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
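The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that keeps appearing is the mechanism behind the bidirectional rounds: bash's ${var:+word} expansion produces the flag pair only when a controller key is configured for that key id, and an unset or empty entry leaves the array empty, which is why the key3 rounds attach with --dhchap-key alone. A standalone illustration (the array contents are assumed for the demo, mirroring what this run implies):

  #!/usr/bin/env bash
  # ${var:+word} expands to word when var is set and non-empty, else to nothing.
  ckeys=(ckey0 ckey1 ckey2 "")   # assumed: no controller key behind key3

  for keyid in 0 3; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
  done
  # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
  # keyid=3 -> 0 extra arg(s):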
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:47.840 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:48.098 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:48.664
00:13:48.664 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:48.664 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:48.664 17:13:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:48.923 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:48.923 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:48.923 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:48.923 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:48.923 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:48.923 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:48.923 {
00:13:48.923 "auth": {
00:13:48.923 "dhgroup": "ffdhe6144",
00:13:48.923 "digest": "sha384",
00:13:48.923 "state": "completed"
00:13:48.923 },
00:13:48.923 "cntlid": 83,
00:13:48.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:48.923 "listen_address": {
00:13:48.923 "adrfam": "IPv4",
00:13:48.923 "traddr": "10.0.0.3",
00:13:48.923 "trsvcid": "4420",
00:13:48.923 "trtype": "TCP"
00:13:48.923 },
00:13:48.923 "peer_address": {
00:13:48.923 "adrfam": "IPv4",
00:13:48.923 "traddr": "10.0.0.1",
00:13:48.923 "trsvcid": "37110",
00:13:48.923 "trtype": "TCP"
00:13:48.923 },
00:13:48.923 "qid": 0,
00:13:48.923 "state": "enabled",
00:13:48.923 "thread": "nvmf_tgt_poll_group_000"
00:13:48.923 }
00:13:48.923 ]'
17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:49.181 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:49.498 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:49.498 17:13:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:50.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:50.065 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:50.632 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:50.633 17:13:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:50.891
00:13:50.891 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:50.891 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:50.891 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:51.459 {
00:13:51.459 "auth": {
00:13:51.459 "dhgroup": "ffdhe6144",
00:13:51.459 "digest": "sha384",
00:13:51.459 "state": "completed"
00:13:51.459 },
00:13:51.459 "cntlid": 85,
00:13:51.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:51.459 "listen_address": {
00:13:51.459 "adrfam": "IPv4",
00:13:51.459 "traddr": "10.0.0.3",
00:13:51.459 "trsvcid": "4420",
00:13:51.459 "trtype": "TCP"
00:13:51.459 },
00:13:51.459 "peer_address": {
00:13:51.459 "adrfam": "IPv4",
00:13:51.459 "traddr": "10.0.0.1",
00:13:51.459 "trsvcid": "37146",
00:13:51.459 "trtype": "TCP"
00:13:51.459 },
00:13:51.459 "qid": 0,
00:13:51.459 "state": "enabled",
00:13:51.459 "thread": "nvmf_tgt_poll_group_000"
00:13:51.459 }
00:13:51.459 ]'
17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:51.459 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:51.716 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:13:51.716 17:13:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:52.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:52.647 17:13:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:52.903 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:52.904 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:52.904 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:53.468
00:13:53.468 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:53.468 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:53.468 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:53.726 {
00:13:53.726 "auth": {
00:13:53.726 "dhgroup": "ffdhe6144",
00:13:53.726 "digest": "sha384",
00:13:53.726 "state": "completed"
00:13:53.726 },
00:13:53.726 "cntlid": 87,
00:13:53.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:53.726 "listen_address": {
00:13:53.726 "adrfam": "IPv4",
00:13:53.726 "traddr": "10.0.0.3",
00:13:53.726 "trsvcid": "4420",
00:13:53.726 "trtype": "TCP"
00:13:53.726 },
00:13:53.726 "peer_address": {
00:13:53.726 "adrfam": "IPv4",
00:13:53.726 "traddr": "10.0.0.1",
00:13:53.726 "trsvcid": "37162",
00:13:53.726 "trtype": "TCP"
00:13:53.726 },
00:13:53.726 "qid": 0,
00:13:53.726 "state": "enabled",
00:13:53.726 "thread": "nvmf_tgt_poll_group_000"
00:13:53.726 }
00:13:53.726 ]'
17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:53.726 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:13:53.727 17:13:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:53.727 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:53.727 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:53.727 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:54.291 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:13:54.291 17:13:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:54.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:54.860 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:55.427 17:13:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:56.131
00:13:56.131 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:56.131 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:56.131 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:56.390 {
00:13:56.390 "auth": {
00:13:56.390 "dhgroup": "ffdhe8192",
00:13:56.390 "digest": "sha384",
00:13:56.390 "state": "completed"
00:13:56.390 },
00:13:56.390 "cntlid": 89,
00:13:56.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:56.390 "listen_address": {
00:13:56.390 "adrfam": "IPv4",
00:13:56.390 "traddr": "10.0.0.3",
00:13:56.390 "trsvcid": "4420",
00:13:56.390 "trtype": "TCP"
00:13:56.390 },
00:13:56.390 "peer_address": {
00:13:56.390 "adrfam": "IPv4",
00:13:56.390 "traddr": "10.0.0.1",
00:13:56.390 "trsvcid": "59002",
00:13:56.390 "trtype": "TCP"
00:13:56.390 },
00:13:56.390 "qid": 0,
00:13:56.390 "state": "enabled",
00:13:56.390 "thread": "nvmf_tgt_poll_group_000"
00:13:56.390 }
00:13:56.390 ]'
17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:56.390 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:56.649 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:56.649 17:13:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:13:57.586 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:57.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:57.586 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:13:57.586 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.586 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:57.587 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
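Each qpairs dump also records both ends of the connection: listen_address is the target's 10.0.0.3:4420 listener, while peer_address captures the host's ephemeral source port (37040, 37080, 37110, 37162 during the earlier rounds, 59002 and up once ffdhe8192 starts). A one-liner to tabulate that with the same RPC and jq used above:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 |
    jq -r '.[] | "qid=\(.qid) \(.peer_address.traddr):\(.peer_address.trsvcid) -> \(.listen_address.traddr):\(.listen_address.trsvcid) auth=\(.auth.state)"'
  # e.g.: qid=0 10.0.0.1:59002 -> 10.0.0.3:4420 auth=completed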
00:13:57.587 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:57.587 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:57.587 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:57.845 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:57.846 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:57.846 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:57.846 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:57.846 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:57.846 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:57.846 17:13:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:58.414
00:13:58.414 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:58.414 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:58.414 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:58.982 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:58.982 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:58.982 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:58.982 17:13:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:58.982 {
00:13:58.982 "auth": {
00:13:58.982 "dhgroup": "ffdhe8192",
00:13:58.982 "digest": "sha384",
00:13:58.982 "state": "completed"
00:13:58.982 },
00:13:58.982 "cntlid": 91,
00:13:58.982 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:13:58.982 "listen_address": {
00:13:58.982 "adrfam": "IPv4",
00:13:58.982 "traddr": "10.0.0.3",
00:13:58.982 "trsvcid": "4420",
00:13:58.982 "trtype": "TCP"
00:13:58.982 },
00:13:58.982 "peer_address": {
00:13:58.982 "adrfam": "IPv4",
00:13:58.982 "traddr": "10.0.0.1",
00:13:58.982 "trsvcid": "59024",
00:13:58.982 "trtype": "TCP"
00:13:58.982 },
00:13:58.982 "qid": 0,
00:13:58.982 "state": "enabled",
00:13:58.982 "thread": "nvmf_tgt_poll_group_000"
00:13:58.982 }
00:13:58.982 ]'
17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:58.982 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:59.241 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:13:59.241 17:13:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==:
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:00.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:00.175 17:13:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:01.111
00:14:01.111 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:01.111 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:01.111 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:01.370 {
00:14:01.370 "auth": {
00:14:01.370 "dhgroup": "ffdhe8192",
00:14:01.370 "digest": "sha384",
00:14:01.370 "state": "completed"
00:14:01.370 },
00:14:01.370 "cntlid": 93,
00:14:01.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:14:01.370 "listen_address": {
00:14:01.370 "adrfam": "IPv4",
00:14:01.370 "traddr": "10.0.0.3",
00:14:01.370 "trsvcid": "4420",
00:14:01.370 "trtype": "TCP"
00:14:01.370 },
00:14:01.370 "peer_address": {
00:14:01.370 "adrfam": "IPv4",
00:14:01.370 "traddr": "10.0.0.1",
00:14:01.370 "trsvcid": "59032",
00:14:01.370 "trtype": "TCP"
00:14:01.370 },
00:14:01.370 "qid": 0,
00:14:01.370 "state": "enabled",
00:14:01.370 "thread": "nvmf_tgt_poll_group_000"
00:14:01.370 }
00:14:01.370 ]'
17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:01.370 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:01.628 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:14:01.629 17:13:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5:
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:02.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:02.563 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:02.822 17:13:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:03.784
00:14:03.784 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:03.784 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:03.784 17:13:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:03.784 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:03.784 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:03.784 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.784 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:04.073 {
00:14:04.073 "auth": {
00:14:04.073 "dhgroup": "ffdhe8192",
00:14:04.073 "digest": "sha384",
00:14:04.073 "state": "completed"
00:14:04.073 },
00:14:04.073 "cntlid": 95,
00:14:04.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:14:04.073 "listen_address": {
00:14:04.073 "adrfam": "IPv4",
00:14:04.073 "traddr": "10.0.0.3",
00:14:04.073 "trsvcid": "4420",
00:14:04.073 "trtype": "TCP"
00:14:04.073 },
00:14:04.073 "peer_address": {
00:14:04.073 "adrfam": "IPv4",
00:14:04.073 "traddr": "10.0.0.1",
00:14:04.073 "trsvcid": "59060",
00:14:04.073 "trtype": "TCP"
00:14:04.073 },
00:14:04.073 "qid": 0,
00:14:04.073 "state": "enabled",
00:14:04.073 "thread": "nvmf_tgt_poll_group_000"
00:14:04.073 }
00:14:04.073 ]'
17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:04.073 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:04.332 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:14:04.332 17:13:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=:
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:05.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:05.268 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:05.526 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:14:05.526 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:05.526 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:05.526 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:05.526 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:05.527 17:13:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:05.784
00:14:05.784 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:05.784 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:05.784 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:06.352 {
00:14:06.352 "auth": {
00:14:06.352 "dhgroup": "null",
00:14:06.352 "digest": "sha512",
00:14:06.352 "state": "completed"
00:14:06.352 },
00:14:06.352 "cntlid": 97,
00:14:06.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:14:06.352 "listen_address": {
00:14:06.352 "adrfam": "IPv4",
00:14:06.352 "traddr": "10.0.0.3",
00:14:06.352 "trsvcid": "4420",
00:14:06.352 "trtype": "TCP"
00:14:06.352 },
00:14:06.352 "peer_address": {
00:14:06.352 "adrfam": "IPv4",
00:14:06.352 "traddr": "10.0.0.1",
00:14:06.352 "trsvcid": "40428",
00:14:06.352 "trtype": "TCP"
00:14:06.352 },
00:14:06.352 "qid": 0,
00:14:06.352 "state": "enabled",
00:14:06.352 "thread": "nvmf_tgt_poll_group_000"
00:14:06.352 }
00:14:06.352 ]'
17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:06.352 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:06.611 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:14:06.611 17:13:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=:
00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:07.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591
-- # [[ 0 == 0 ]] 00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:07.546 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.805 17:13:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.064 00:14:08.323 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.323 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.323 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.581 17:13:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.581 { 00:14:08.581 "auth": { 00:14:08.581 "dhgroup": "null", 00:14:08.581 "digest": "sha512", 00:14:08.581 "state": "completed" 00:14:08.581 }, 00:14:08.581 "cntlid": 99, 00:14:08.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:08.581 "listen_address": { 00:14:08.581 "adrfam": "IPv4", 00:14:08.581 "traddr": "10.0.0.3", 00:14:08.581 "trsvcid": "4420", 00:14:08.581 "trtype": "TCP" 00:14:08.581 }, 00:14:08.581 "peer_address": { 00:14:08.581 "adrfam": "IPv4", 00:14:08.581 "traddr": "10.0.0.1", 00:14:08.581 "trsvcid": "40452", 00:14:08.581 "trtype": "TCP" 00:14:08.581 }, 00:14:08.581 "qid": 0, 00:14:08.581 "state": "enabled", 00:14:08.581 "thread": "nvmf_tgt_poll_group_000" 00:14:08.581 } 00:14:08.581 ]' 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.581 17:13:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.147 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:09.147 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.710 17:13:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:09.710 17:13:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:09.967 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:09.967 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.967 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:09.967 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.968 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.225 00:14:10.482 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.482 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.482 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.741 { 00:14:10.741 "auth": { 00:14:10.741 "dhgroup": "null", 00:14:10.741 "digest": "sha512", 00:14:10.741 "state": "completed" 00:14:10.741 }, 00:14:10.741 "cntlid": 101, 00:14:10.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:10.741 "listen_address": { 00:14:10.741 "adrfam": "IPv4", 00:14:10.741 "traddr": "10.0.0.3", 00:14:10.741 "trsvcid": "4420", 00:14:10.741 "trtype": "TCP" 00:14:10.741 }, 00:14:10.741 "peer_address": { 00:14:10.741 "adrfam": "IPv4", 00:14:10.741 "traddr": "10.0.0.1", 00:14:10.741 "trsvcid": "40490", 00:14:10.741 "trtype": "TCP" 00:14:10.741 }, 00:14:10.741 "qid": 0, 00:14:10.741 "state": "enabled", 00:14:10.741 "thread": "nvmf_tgt_poll_group_000" 00:14:10.741 } 00:14:10.741 ]' 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:10.741 17:13:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.741 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.741 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.741 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.305 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:11.305 17:13:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:11.869 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:11.870 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.435 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.693 00:14:12.693 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.693 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.693 17:13:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.951 { 00:14:12.951 "auth": { 00:14:12.951 "dhgroup": "null", 00:14:12.951 "digest": "sha512", 00:14:12.951 "state": "completed" 00:14:12.951 }, 00:14:12.951 "cntlid": 103, 00:14:12.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:12.951 "listen_address": { 00:14:12.951 "adrfam": "IPv4", 00:14:12.951 "traddr": "10.0.0.3", 00:14:12.951 "trsvcid": "4420", 00:14:12.951 "trtype": "TCP" 00:14:12.951 }, 00:14:12.951 "peer_address": { 00:14:12.951 "adrfam": "IPv4", 00:14:12.951 "traddr": "10.0.0.1", 00:14:12.951 "trsvcid": "40516", 00:14:12.951 "trtype": "TCP" 00:14:12.951 }, 00:14:12.951 "qid": 0, 00:14:12.951 "state": "enabled", 00:14:12.951 "thread": "nvmf_tgt_poll_group_000" 00:14:12.951 } 00:14:12.951 ]' 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:12.951 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.227 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:13.227 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.227 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.227 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.227 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.507 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:13.507 17:13:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.442 17:13:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.010 00:14:15.010 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.010 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.010 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.268 
17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.268 { 00:14:15.268 "auth": { 00:14:15.268 "dhgroup": "ffdhe2048", 00:14:15.268 "digest": "sha512", 00:14:15.268 "state": "completed" 00:14:15.268 }, 00:14:15.268 "cntlid": 105, 00:14:15.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:15.268 "listen_address": { 00:14:15.268 "adrfam": "IPv4", 00:14:15.268 "traddr": "10.0.0.3", 00:14:15.268 "trsvcid": "4420", 00:14:15.268 "trtype": "TCP" 00:14:15.268 }, 00:14:15.268 "peer_address": { 00:14:15.268 "adrfam": "IPv4", 00:14:15.268 "traddr": "10.0.0.1", 00:14:15.268 "trsvcid": "38194", 00:14:15.268 "trtype": "TCP" 00:14:15.268 }, 00:14:15.268 "qid": 0, 00:14:15.268 "state": "enabled", 00:14:15.268 "thread": "nvmf_tgt_poll_group_000" 00:14:15.268 } 00:14:15.268 ]' 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.268 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.835 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:15.835 17:13:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:16.403 17:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:16.403 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.663 17:13:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.231 00:14:17.231 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.231 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.231 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.490 { 00:14:17.490 "auth": { 00:14:17.490 "dhgroup": "ffdhe2048", 00:14:17.490 "digest": "sha512", 00:14:17.490 "state": "completed" 00:14:17.490 }, 00:14:17.490 "cntlid": 107, 00:14:17.490 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:17.490 "listen_address": { 00:14:17.490 "adrfam": "IPv4", 00:14:17.490 "traddr": "10.0.0.3", 00:14:17.490 "trsvcid": "4420", 00:14:17.490 "trtype": "TCP" 00:14:17.490 }, 00:14:17.490 "peer_address": { 00:14:17.490 "adrfam": "IPv4", 00:14:17.490 "traddr": "10.0.0.1", 00:14:17.490 "trsvcid": "38218", 00:14:17.490 "trtype": "TCP" 00:14:17.490 }, 00:14:17.490 "qid": 0, 00:14:17.490 "state": "enabled", 00:14:17.490 "thread": "nvmf_tgt_poll_group_000" 00:14:17.490 } 00:14:17.490 ]' 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:17.490 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.748 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:17.748 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.748 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.748 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.748 17:13:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.006 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:18.006 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:18.939 17:14:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:19.197 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:19.197 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.198 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.456 00:14:19.456 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.456 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.456 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.715 { 00:14:19.715 "auth": { 00:14:19.715 "dhgroup": "ffdhe2048", 00:14:19.715 "digest": "sha512", 00:14:19.715 "state": "completed" 00:14:19.715 }, 00:14:19.715 "cntlid": 109, 00:14:19.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:19.715 "listen_address": { 00:14:19.715 "adrfam": "IPv4", 00:14:19.715 "traddr": "10.0.0.3", 00:14:19.715 "trsvcid": "4420", 00:14:19.715 "trtype": "TCP" 00:14:19.715 }, 00:14:19.715 "peer_address": { 00:14:19.715 "adrfam": "IPv4", 00:14:19.715 "traddr": "10.0.0.1", 00:14:19.715 "trsvcid": "38240", 00:14:19.715 "trtype": "TCP" 00:14:19.715 }, 00:14:19.715 "qid": 0, 00:14:19.715 "state": "enabled", 00:14:19.715 "thread": "nvmf_tgt_poll_group_000" 00:14:19.715 } 00:14:19.715 ]' 00:14:19.715 17:14:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.973 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.540 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:20.540 17:14:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
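(For readers tracing the loop: every sha512 iteration in this log reduces to the same host/target RPC round-trip. The sketch below condenses one pass using only commands that appear verbatim in the trace; the key slot "key2" matches the iteration shown nearby, and rpc.py's default target socket is assumed for the nvmf_* calls, which this log never states explicitly.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Host side: pin the initiator to one digest/DH-group combination for this pass.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Target side: register the host NQN with its DH-HMAC-CHAP key pair.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Authenticated attach, then confirm the negotiated parameters via the target's qpair listing.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
# The kernel-initiator leg then repeats the handshake with nvme connect ... --dhchap-secret DHHC-1:...
# followed by nvme disconnect, before nvmf_subsystem_remove_host clears the key for the next pass.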
00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:21.107 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.365 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.933 00:14:21.933 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.933 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.933 17:14:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.191 { 00:14:22.191 "auth": { 00:14:22.191 "dhgroup": "ffdhe2048", 00:14:22.191 "digest": "sha512", 00:14:22.191 "state": "completed" 00:14:22.191 }, 00:14:22.191 "cntlid": 111, 00:14:22.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:22.191 "listen_address": { 00:14:22.191 "adrfam": "IPv4", 00:14:22.191 "traddr": "10.0.0.3", 00:14:22.191 "trsvcid": "4420", 00:14:22.191 "trtype": "TCP" 00:14:22.191 }, 00:14:22.191 "peer_address": { 00:14:22.191 "adrfam": "IPv4", 00:14:22.191 "traddr": "10.0.0.1", 00:14:22.191 "trsvcid": "38258", 00:14:22.191 "trtype": "TCP" 00:14:22.191 }, 00:14:22.191 "qid": 0, 00:14:22.191 "state": "enabled", 00:14:22.191 "thread": "nvmf_tgt_poll_group_000" 00:14:22.191 } 00:14:22.191 ]' 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:22.191 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.450 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.450 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.450 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.708 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:22.708 17:14:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:23.276 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.276 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:23.276 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.276 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.532 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.532 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.532 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.532 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:23.532 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.805 17:14:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.062 00:14:24.062 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.062 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.062 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.320 { 00:14:24.320 "auth": { 00:14:24.320 "dhgroup": "ffdhe3072", 00:14:24.320 "digest": "sha512", 00:14:24.320 "state": "completed" 00:14:24.320 }, 00:14:24.320 "cntlid": 113, 00:14:24.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:24.320 "listen_address": { 00:14:24.320 "adrfam": "IPv4", 00:14:24.320 "traddr": "10.0.0.3", 00:14:24.320 "trsvcid": "4420", 00:14:24.320 "trtype": "TCP" 00:14:24.320 }, 00:14:24.320 "peer_address": { 00:14:24.320 "adrfam": "IPv4", 00:14:24.320 "traddr": "10.0.0.1", 00:14:24.320 "trsvcid": "37590", 00:14:24.320 "trtype": "TCP" 00:14:24.320 }, 00:14:24.320 "qid": 0, 00:14:24.320 "state": "enabled", 00:14:24.320 "thread": "nvmf_tgt_poll_group_000" 00:14:24.320 } 00:14:24.320 ]' 00:14:24.320 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.578 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.578 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.578 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:24.578 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.579 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.579 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.579 17:14:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.836 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:24.837 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret 
DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.853 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:25.853 17:14:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.853 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.420 00:14:26.420 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.420 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.420 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.676 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.677 { 00:14:26.677 "auth": { 00:14:26.677 "dhgroup": "ffdhe3072", 00:14:26.677 "digest": "sha512", 00:14:26.677 "state": "completed" 00:14:26.677 }, 00:14:26.677 "cntlid": 115, 00:14:26.677 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:26.677 "listen_address": { 00:14:26.677 "adrfam": "IPv4", 00:14:26.677 "traddr": "10.0.0.3", 00:14:26.677 "trsvcid": "4420", 00:14:26.677 "trtype": "TCP" 00:14:26.677 }, 00:14:26.677 "peer_address": { 00:14:26.677 "adrfam": "IPv4", 00:14:26.677 "traddr": "10.0.0.1", 00:14:26.677 "trsvcid": "37620", 00:14:26.677 "trtype": "TCP" 00:14:26.677 }, 00:14:26.677 "qid": 0, 00:14:26.677 "state": "enabled", 00:14:26.677 "thread": "nvmf_tgt_poll_group_000" 00:14:26.677 } 00:14:26.677 ]' 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:26.677 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.934 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.934 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.934 17:14:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.192 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:27.192 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid 
f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:27.757 17:14:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.015 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.582 00:14:28.582 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.582 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.582 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.840 { 00:14:28.840 "auth": { 00:14:28.840 "dhgroup": "ffdhe3072", 00:14:28.840 "digest": "sha512", 00:14:28.840 "state": "completed" 00:14:28.840 }, 00:14:28.840 "cntlid": 117, 00:14:28.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:28.840 "listen_address": { 00:14:28.840 "adrfam": "IPv4", 00:14:28.840 "traddr": "10.0.0.3", 00:14:28.840 "trsvcid": "4420", 00:14:28.840 "trtype": "TCP" 00:14:28.840 }, 00:14:28.840 "peer_address": { 00:14:28.840 "adrfam": "IPv4", 00:14:28.840 "traddr": "10.0.0.1", 00:14:28.840 "trsvcid": "37632", 00:14:28.840 "trtype": "TCP" 00:14:28.840 }, 00:14:28.840 "qid": 0, 00:14:28.840 "state": "enabled", 00:14:28.840 "thread": "nvmf_tgt_poll_group_000" 00:14:28.840 } 00:14:28.840 ]' 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:28.840 17:14:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.840 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.840 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.840 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.840 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.840 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.098 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:29.098 17:14:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:30.033 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.292 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:30.550 00:14:30.550 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.550 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.550 17:14:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.808 { 00:14:30.808 "auth": { 00:14:30.808 "dhgroup": "ffdhe3072", 00:14:30.808 "digest": "sha512", 00:14:30.808 "state": "completed" 00:14:30.808 }, 00:14:30.808 "cntlid": 119, 00:14:30.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:30.808 "listen_address": { 00:14:30.808 "adrfam": "IPv4", 00:14:30.808 "traddr": "10.0.0.3", 00:14:30.808 "trsvcid": "4420", 00:14:30.808 "trtype": "TCP" 00:14:30.808 }, 00:14:30.808 "peer_address": { 00:14:30.808 "adrfam": "IPv4", 00:14:30.808 "traddr": "10.0.0.1", 00:14:30.808 "trsvcid": "37662", 00:14:30.808 "trtype": "TCP" 00:14:30.808 }, 00:14:30.808 "qid": 0, 00:14:30.808 "state": "enabled", 00:14:30.808 "thread": "nvmf_tgt_poll_group_000" 00:14:30.808 } 00:14:30.808 ]' 00:14:30.808 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.066 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:31.066 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.067 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:31.067 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.067 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.067 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.067 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.325 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:31.325 17:14:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.259 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.518 17:14:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
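
The target/auth.sh@119-121 markers above are the loop boundaries: an outer loop over the DH groups (ffdhe3072 has just finished; this iteration starts ffdhe4096) and an inner loop over every key index, with bdev_nvme_set_options re-applied before each connect. A hedged sketch of that control flow; the array contents are inferred from this part of the trace (sha512 only, groups ffdhe3072/ffdhe4096/ffdhe6144, key indices 0-3), and connect_authenticate is a stand-in for the script's own helper:

hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
connect_authenticate() { :; }  # stand-in for the helper traced at target/auth.sh@65-83

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Restrict the initiator to one digest/dhgroup, then run a full
        # add_host / attach / verify / teardown cycle for this key index.
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha512 "$dhgroup" "$keyid"
    done
done
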
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.776 00:14:33.033 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.033 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.033 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.291 { 00:14:33.291 "auth": { 00:14:33.291 "dhgroup": "ffdhe4096", 00:14:33.291 "digest": "sha512", 00:14:33.291 "state": "completed" 00:14:33.291 }, 00:14:33.291 "cntlid": 121, 00:14:33.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:33.291 "listen_address": { 00:14:33.291 "adrfam": "IPv4", 00:14:33.291 "traddr": "10.0.0.3", 00:14:33.291 "trsvcid": "4420", 00:14:33.291 "trtype": "TCP" 00:14:33.291 }, 00:14:33.291 "peer_address": { 00:14:33.291 "adrfam": "IPv4", 00:14:33.291 "traddr": "10.0.0.1", 00:14:33.291 "trsvcid": "37692", 00:14:33.291 "trtype": "TCP" 00:14:33.291 }, 00:14:33.291 "qid": 0, 00:14:33.291 "state": "enabled", 00:14:33.291 "thread": "nvmf_tgt_poll_group_000" 00:14:33.291 } 00:14:33.291 ]' 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.291 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.856 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret 
DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:33.856 17:14:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:34.423 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:34.991 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:34.991 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.991 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:34.991 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.992 17:14:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.250 00:14:35.250 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.250 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.250 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.509 { 00:14:35.509 "auth": { 00:14:35.509 "dhgroup": "ffdhe4096", 00:14:35.509 "digest": "sha512", 00:14:35.509 "state": "completed" 00:14:35.509 }, 00:14:35.509 "cntlid": 123, 00:14:35.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:35.509 "listen_address": { 00:14:35.509 "adrfam": "IPv4", 00:14:35.509 "traddr": "10.0.0.3", 00:14:35.509 "trsvcid": "4420", 00:14:35.509 "trtype": "TCP" 00:14:35.509 }, 00:14:35.509 "peer_address": { 00:14:35.509 "adrfam": "IPv4", 00:14:35.509 "traddr": "10.0.0.1", 00:14:35.509 "trsvcid": "36690", 00:14:35.509 "trtype": "TCP" 00:14:35.509 }, 00:14:35.509 "qid": 0, 00:14:35.509 "state": "enabled", 00:14:35.509 "thread": "nvmf_tgt_poll_group_000" 00:14:35.509 } 00:14:35.509 ]' 00:14:35.509 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.768 17:14:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.028 17:14:18 
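
After every attach, the script asserts what was actually negotiated: the controller must come back under the expected name, and the subsystem's qpair must report the expected digest, dhgroup, and a completed auth state. A condensed sketch of that verification, matching this ffdhe4096 iteration (it assumes the target answers on its default RPC socket; the exact plumbing of the script's rpc_cmd/hostrpc helpers is abbreviated):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }

# The host-side controller must exist under the expected name.
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# The target-side qpair must have authenticated with the expected parameters.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear the bdev controller down before the kernel-initiator connect is tested.
hostrpc bdev_nvme_detach_controller nvme0
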
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:36.028 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:36.984 17:14:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.243 17:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.243 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.502 00:14:37.502 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.502 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.502 17:14:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.760 { 00:14:37.760 "auth": { 00:14:37.760 "dhgroup": "ffdhe4096", 00:14:37.760 "digest": "sha512", 00:14:37.760 "state": "completed" 00:14:37.760 }, 00:14:37.760 "cntlid": 125, 00:14:37.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:37.760 "listen_address": { 00:14:37.760 "adrfam": "IPv4", 00:14:37.760 "traddr": "10.0.0.3", 00:14:37.760 "trsvcid": "4420", 00:14:37.760 "trtype": "TCP" 00:14:37.760 }, 00:14:37.760 "peer_address": { 00:14:37.760 "adrfam": "IPv4", 00:14:37.760 "traddr": "10.0.0.1", 00:14:37.760 "trsvcid": "36712", 00:14:37.760 "trtype": "TCP" 00:14:37.760 }, 00:14:37.760 "qid": 0, 00:14:37.760 "state": "enabled", 00:14:37.760 "thread": "nvmf_tgt_poll_group_000" 00:14:37.760 } 00:14:37.760 ]' 00:14:37.760 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:38.018 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:38.018 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:38.018 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:38.018 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:38.018 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.019 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.019 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.276 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:38.276 17:14:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.208 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.208 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.467 17:14:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.749 00:14:39.749 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.749 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.749 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.315 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.315 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.315 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.315 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.315 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.315 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.315 { 00:14:40.315 "auth": { 00:14:40.315 "dhgroup": "ffdhe4096", 00:14:40.315 "digest": "sha512", 00:14:40.315 "state": "completed" 00:14:40.316 }, 00:14:40.316 "cntlid": 127, 00:14:40.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:40.316 "listen_address": { 00:14:40.316 "adrfam": "IPv4", 00:14:40.316 "traddr": "10.0.0.3", 00:14:40.316 "trsvcid": "4420", 00:14:40.316 "trtype": "TCP" 00:14:40.316 }, 00:14:40.316 "peer_address": { 00:14:40.316 "adrfam": "IPv4", 00:14:40.316 "traddr": "10.0.0.1", 00:14:40.316 "trsvcid": "36746", 00:14:40.316 "trtype": "TCP" 00:14:40.316 }, 00:14:40.316 "qid": 0, 00:14:40.316 "state": "enabled", 00:14:40.316 "thread": "nvmf_tgt_poll_group_000" 00:14:40.316 } 00:14:40.316 ]' 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.316 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.573 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:40.573 17:14:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.506 17:14:23 
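
One bash detail in the connect_authenticate trace is doing quiet work here: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) uses the :+ expansion, so when the ckeys slot for a key index is empty, the flag/value pair vanishes entirely. That is why the key3 passes above run nvmf_subsystem_add_host and nvme connect with only the single DHHC-1:03 host secret (unidirectional auth), while key0-key2 also carry a controller key. A tiny illustration of the idiom (secret values are placeholders):

# ${var:+word} expands to word only if var is set and non-empty; inside an
# unquoted array assignment that yields either two elements or none.
ckeys=([0]="placeholder-ckey0" [3]="")
ckey=(${ckeys[0]:+--dhchap-ctrlr-key "ckey0"})
echo "${#ckey[@]}"   # 2 -> the controller-key flag and its value are passed
ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
echo "${#ckey[@]}"   # 0 -> bidirectional authentication is skipped
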
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.506 17:14:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:42.072 00:14:42.072 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:42.072 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:42.072 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.329 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.329 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.329 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.329 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.329 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.330 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.330 { 00:14:42.330 "auth": { 00:14:42.330 "dhgroup": "ffdhe6144", 00:14:42.330 "digest": "sha512", 00:14:42.330 "state": "completed" 00:14:42.330 }, 00:14:42.330 "cntlid": 129, 00:14:42.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:42.330 "listen_address": { 00:14:42.330 "adrfam": "IPv4", 00:14:42.330 "traddr": "10.0.0.3", 00:14:42.330 "trsvcid": "4420", 00:14:42.330 "trtype": "TCP" 00:14:42.330 }, 00:14:42.330 "peer_address": { 00:14:42.330 "adrfam": "IPv4", 00:14:42.330 "traddr": "10.0.0.1", 00:14:42.330 "trsvcid": "36760", 00:14:42.330 "trtype": "TCP" 00:14:42.330 }, 00:14:42.330 "qid": 0, 00:14:42.330 "state": "enabled", 00:14:42.330 "thread": "nvmf_tgt_poll_group_000" 00:14:42.330 } 00:14:42.330 ]' 00:14:42.330 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.588 17:14:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.846 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:42.846 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:43.416 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.673 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.673 17:14:25 
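
Besides the SPDK host stack, each key is also exercised through the kernel initiator: nvme_connect wraps nvme-cli, handing it the same secrets in their DHHC-1 interchange encoding, and the connection is torn down again before the next key is tested. The command shape, copied from the trace with the secrets abbreviated:

# Host-side connect as issued at target/auth.sh@36 (secrets shortened to '...').
nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 \
    --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'

# Expected output: "NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)".
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
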
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.930 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.930 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.930 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.930 17:14:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:44.188 00:14:44.446 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.446 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.446 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.705 { 00:14:44.705 "auth": { 00:14:44.705 "dhgroup": "ffdhe6144", 00:14:44.705 "digest": "sha512", 00:14:44.705 "state": "completed" 00:14:44.705 }, 00:14:44.705 "cntlid": 131, 00:14:44.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:44.705 "listen_address": { 00:14:44.705 "adrfam": "IPv4", 00:14:44.705 "traddr": "10.0.0.3", 00:14:44.705 "trsvcid": "4420", 00:14:44.705 "trtype": "TCP" 00:14:44.705 }, 00:14:44.705 "peer_address": { 00:14:44.705 "adrfam": "IPv4", 00:14:44.705 "traddr": "10.0.0.1", 00:14:44.705 "trsvcid": "37556", 00:14:44.705 "trtype": "TCP" 00:14:44.705 }, 00:14:44.705 "qid": 0, 00:14:44.705 "state": "enabled", 00:14:44.705 "thread": "nvmf_tgt_poll_group_000" 00:14:44.705 } 00:14:44.705 ]' 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.705 17:14:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.964 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:44.964 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:45.960 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.961 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:45.961 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.961 17:14:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.961 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.961 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.961 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:45.961 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.219 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.220 17:14:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.220 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.220 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.220 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.220 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.220 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:46.786 00:14:46.786 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.786 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.786 17:14:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.045 { 00:14:47.045 "auth": { 00:14:47.045 "dhgroup": "ffdhe6144", 00:14:47.045 "digest": "sha512", 00:14:47.045 "state": "completed" 00:14:47.045 }, 00:14:47.045 "cntlid": 133, 00:14:47.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:47.045 "listen_address": { 00:14:47.045 "adrfam": "IPv4", 00:14:47.045 "traddr": "10.0.0.3", 00:14:47.045 "trsvcid": "4420", 00:14:47.045 "trtype": "TCP" 00:14:47.045 }, 00:14:47.045 "peer_address": { 00:14:47.045 "adrfam": "IPv4", 00:14:47.045 "traddr": "10.0.0.1", 00:14:47.045 "trsvcid": "37570", 00:14:47.045 "trtype": "TCP" 00:14:47.045 }, 00:14:47.045 "qid": 0, 00:14:47.045 "state": "enabled", 00:14:47.045 "thread": "nvmf_tgt_poll_group_000" 00:14:47.045 } 00:14:47.045 ]' 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.045 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.303 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:14:47.303 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.303 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.303 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.303 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.561 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:47.561 17:14:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:48.494 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:48.753 17:14:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:49.320 00:14:49.320 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.320 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.320 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.579 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.579 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.579 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.579 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.579 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.579 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.580 { 00:14:49.580 "auth": { 00:14:49.580 "dhgroup": "ffdhe6144", 00:14:49.580 "digest": "sha512", 00:14:49.580 "state": "completed" 00:14:49.580 }, 00:14:49.580 "cntlid": 135, 00:14:49.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:49.580 "listen_address": { 00:14:49.580 "adrfam": "IPv4", 00:14:49.580 "traddr": "10.0.0.3", 00:14:49.580 "trsvcid": "4420", 00:14:49.580 "trtype": "TCP" 00:14:49.580 }, 00:14:49.580 "peer_address": { 00:14:49.580 "adrfam": "IPv4", 00:14:49.580 "traddr": "10.0.0.1", 00:14:49.580 "trsvcid": "37592", 00:14:49.580 "trtype": "TCP" 00:14:49.580 }, 00:14:49.580 "qid": 0, 00:14:49.580 "state": "enabled", 00:14:49.580 "thread": "nvmf_tgt_poll_group_000" 00:14:49.580 } 00:14:49.580 ]' 00:14:49.580 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.839 17:14:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.098 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:50.098 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:14:51.034 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.034 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:51.034 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.034 17:14:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.034 17:14:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:51.971 00:14:51.971 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.971 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.971 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.230 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.230 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.231 { 00:14:52.231 "auth": { 00:14:52.231 "dhgroup": "ffdhe8192", 00:14:52.231 "digest": "sha512", 00:14:52.231 "state": "completed" 00:14:52.231 }, 00:14:52.231 "cntlid": 137, 00:14:52.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:52.231 "listen_address": { 00:14:52.231 "adrfam": "IPv4", 00:14:52.231 "traddr": "10.0.0.3", 00:14:52.231 "trsvcid": "4420", 00:14:52.231 "trtype": "TCP" 00:14:52.231 }, 00:14:52.231 "peer_address": { 00:14:52.231 "adrfam": "IPv4", 00:14:52.231 "traddr": "10.0.0.1", 00:14:52.231 "trsvcid": "37622", 00:14:52.231 "trtype": "TCP" 00:14:52.231 }, 00:14:52.231 "qid": 0, 00:14:52.231 "state": "enabled", 00:14:52.231 "thread": "nvmf_tgt_poll_group_000" 00:14:52.231 } 00:14:52.231 ]' 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.231 17:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.231 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.800 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:52.800 17:14:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:14:53.374 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:53.375 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:53.633 17:14:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:53.633 17:14:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.571 00:14:54.571 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.571 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.571 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.830 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.830 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.830 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.830 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.830 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.830 17:14:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.830 { 00:14:54.830 "auth": { 00:14:54.830 "dhgroup": "ffdhe8192", 00:14:54.830 "digest": "sha512", 00:14:54.830 "state": "completed" 00:14:54.830 }, 00:14:54.830 "cntlid": 139, 00:14:54.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:54.830 "listen_address": { 00:14:54.830 "adrfam": "IPv4", 00:14:54.830 "traddr": "10.0.0.3", 00:14:54.830 "trsvcid": "4420", 00:14:54.830 "trtype": "TCP" 00:14:54.830 }, 00:14:54.830 "peer_address": { 00:14:54.830 "adrfam": "IPv4", 00:14:54.830 "traddr": "10.0.0.1", 00:14:54.830 "trsvcid": "40276", 00:14:54.830 "trtype": "TCP" 00:14:54.830 }, 00:14:54.830 "qid": 0, 00:14:54.830 "state": "enabled", 00:14:54.830 "thread": "nvmf_tgt_poll_group_000" 00:14:54.830 } 00:14:54.830 ]' 00:14:54.830 17:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.830 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.397 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:55.397 17:14:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: --dhchap-ctrl-secret DHHC-1:02:NzA5ZmMwMmExYjQ0NjA5YjhmZmNjM2JlNjBhODAzYzMyNTk3Mjg2ZmM3YjViYzlh8b7Mkg==: 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:55.964 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.531 17:14:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.098 00:14:57.098 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.098 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.098 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.355 { 00:14:57.355 "auth": { 00:14:57.355 "dhgroup": "ffdhe8192", 00:14:57.355 "digest": "sha512", 00:14:57.355 "state": "completed" 00:14:57.355 }, 00:14:57.355 "cntlid": 141, 00:14:57.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:57.355 "listen_address": { 00:14:57.355 "adrfam": "IPv4", 00:14:57.355 "traddr": "10.0.0.3", 00:14:57.355 "trsvcid": "4420", 00:14:57.355 "trtype": "TCP" 00:14:57.355 }, 00:14:57.355 "peer_address": { 00:14:57.355 "adrfam": "IPv4", 00:14:57.355 "traddr": "10.0.0.1", 00:14:57.355 "trsvcid": "40300", 00:14:57.355 "trtype": "TCP" 00:14:57.355 }, 00:14:57.355 "qid": 0, 00:14:57.355 "state": 
"enabled", 00:14:57.355 "thread": "nvmf_tgt_poll_group_000" 00:14:57.355 } 00:14:57.355 ]' 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.355 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.920 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:57.920 17:14:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:01:YmI5NDMzNTM2ZDQwOWZhODZmMjNkYjEyMDM5YjViZDi6fNn5: 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.484 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.802 17:14:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:59.733 00:14:59.733 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.733 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.733 17:14:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.733 { 00:14:59.733 "auth": { 00:14:59.733 "dhgroup": "ffdhe8192", 00:14:59.733 "digest": "sha512", 00:14:59.733 "state": "completed" 00:14:59.733 }, 00:14:59.733 "cntlid": 143, 00:14:59.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:14:59.733 "listen_address": { 00:14:59.733 "adrfam": "IPv4", 00:14:59.733 "traddr": "10.0.0.3", 00:14:59.733 "trsvcid": "4420", 00:14:59.733 "trtype": "TCP" 00:14:59.733 }, 00:14:59.733 "peer_address": { 00:14:59.733 "adrfam": "IPv4", 00:14:59.733 "traddr": "10.0.0.1", 00:14:59.733 "trsvcid": "40338", 00:14:59.733 "trtype": "TCP" 00:14:59.733 }, 00:14:59.733 "qid": 0, 00:14:59.733 
"state": "enabled", 00:14:59.733 "thread": "nvmf_tgt_poll_group_000" 00:14:59.733 } 00:14:59.733 ]' 00:14:59.733 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.990 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.248 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:15:00.248 17:14:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:01.180 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.438 17:14:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:02.004 00:15:02.004 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.004 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.004 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.572 { 00:15:02.572 "auth": { 00:15:02.572 "dhgroup": "ffdhe8192", 00:15:02.572 "digest": "sha512", 00:15:02.572 "state": "completed" 00:15:02.572 }, 00:15:02.572 
"cntlid": 145, 00:15:02.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:02.572 "listen_address": { 00:15:02.572 "adrfam": "IPv4", 00:15:02.572 "traddr": "10.0.0.3", 00:15:02.572 "trsvcid": "4420", 00:15:02.572 "trtype": "TCP" 00:15:02.572 }, 00:15:02.572 "peer_address": { 00:15:02.572 "adrfam": "IPv4", 00:15:02.572 "traddr": "10.0.0.1", 00:15:02.572 "trsvcid": "40364", 00:15:02.572 "trtype": "TCP" 00:15:02.572 }, 00:15:02.572 "qid": 0, 00:15:02.572 "state": "enabled", 00:15:02.572 "thread": "nvmf_tgt_poll_group_000" 00:15:02.572 } 00:15:02.572 ]' 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.572 17:14:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.830 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:15:02.830 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:00:MjU5NTQ3ZDQ3ZDJiM2Y3N2IxNTg3YWU1ZDY1YzAzNzQwMTAwYTY1OTQ0Y2M2OWE1qObNVg==: --dhchap-ctrl-secret DHHC-1:03:OGQ3YzE1ODBhN2Y3NjExMDc5YzRhMzkzYjkwYzM2NGJlODViNGJmNGE5ZGY1YThkMThjNjhjOTVkMGMxNTc5OKpuQVw=: 00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 00:15:03.761 17:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:15:03.761 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:15:03.762 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:03.762 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:15:03.762 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:03.762 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:15:03.762 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:15:03.762 17:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:15:04.327 2024/12/06 17:14:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error
00:15:04.327 request:
00:15:04.327 {
00:15:04.327 "method": "bdev_nvme_attach_controller",
00:15:04.327 "params": {
00:15:04.327 "name": "nvme0",
00:15:04.327 "trtype": "tcp",
00:15:04.327 "traddr": "10.0.0.3",
00:15:04.327 "adrfam": "ipv4",
00:15:04.327 "trsvcid": "4420",
00:15:04.327 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:15:04.327 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56",
00:15:04.327 "prchk_reftag": false,
00:15:04.327 "prchk_guard": false,
00:15:04.327 "hdgst": false,
00:15:04.327 "ddgst": false,
00:15:04.327 "dhchap_key": "key2",
00:15:04.327 "allow_unrecognized_csi": false
00:15:04.327 }
00:15:04.327 }
00:15:04.327 Got JSON-RPC error response
00:15:04.327 GoRPCClient: error on JSON-RPC call
00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
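The failure above is deliberate: target/auth.sh@144 re-added the host with only key1 allowed, so a host offering key2 cannot complete DH-HMAC-CHAP authentication and bdev_nvme_attach_controller fails with Code=-5 (Input/output error). A minimal standalone sketch of this negative check, reusing only commands and flags that appear in this run (the addresses, NQNs, RPC socket path, and the key names key1/key2 are this run's values, with key1/key2 loaded into the keyring earlier in the test, not canonical ones):

    # Target side: allow only key1 for this host on the subsystem.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 \
        --dhchap-key key1
    # Host side: offer the wrong key; the attach is expected to fail with -5 (-EIO).
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 && echo "unexpected: attach succeeded"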
00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.327 17:14:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:04.907 2024/12/06 17:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:04.907 request: 00:15:04.907 { 00:15:04.907 "method": "bdev_nvme_attach_controller", 00:15:04.907 "params": { 00:15:04.907 "name": "nvme0", 00:15:04.907 "trtype": "tcp", 00:15:04.907 "traddr": "10.0.0.3", 00:15:04.907 "adrfam": "ipv4", 00:15:04.907 "trsvcid": "4420", 00:15:04.907 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:04.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:04.907 "prchk_reftag": false, 00:15:04.907 "prchk_guard": false, 00:15:04.907 "hdgst": false, 00:15:04.907 "ddgst": false, 00:15:04.907 "dhchap_key": "key1", 00:15:04.907 "dhchap_ctrlr_key": "ckey2", 00:15:04.907 "allow_unrecognized_csi": false 00:15:04.907 } 00:15:04.907 } 00:15:04.907 Got JSON-RPC error response 00:15:04.907 GoRPCClient: error on JSON-RPC call 00:15:04.907 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.166 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.733 2024/12/06 17:14:47 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:05.733 request: 00:15:05.733 { 00:15:05.733 "method": "bdev_nvme_attach_controller", 00:15:05.733 "params": { 00:15:05.733 "name": "nvme0", 00:15:05.733 "trtype": "tcp", 00:15:05.733 "traddr": "10.0.0.3", 00:15:05.733 "adrfam": "ipv4", 00:15:05.733 "trsvcid": "4420", 00:15:05.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:05.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:05.733 "prchk_reftag": false, 00:15:05.733 "prchk_guard": false, 00:15:05.733 "hdgst": false, 00:15:05.733 "ddgst": false, 00:15:05.733 "dhchap_key": "key1", 00:15:05.733 "dhchap_ctrlr_key": "ckey1", 00:15:05.733 "allow_unrecognized_csi": false 00:15:05.733 } 00:15:05.733 } 00:15:05.733 Got JSON-RPC error response 00:15:05.733 GoRPCClient: error on JSON-RPC call 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.733 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 76337 00:15:05.734 17:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76337 ']' 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76337 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76337 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:05.734 killing process with pid 76337 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76337' 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76337 00:15:05.734 17:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76337 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=81385 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 81385 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81385 ']' 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
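[Note] Here the harness kills the first nvmf target (pid 76337) and brings up a replacement (pid 81385) with the nvmf_auth debug log flag enabled, so the DH-HMAC-CHAP handshakes that follow are traced; --wait-for-rpc defers framework initialization until the test resumes it over RPC, letting the setup RPCs be batched before the target starts serving. Stripped of the harness plumbing, the restart reduces to roughly (command line as logged; backgrounding and the waitforlisten loop are the harness's own):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # target-side rpc_cmd talks to the default /var/tmp/spdk.sock;
  # host-side hostrpc passes -s /var/tmp/host.sock to reach the initiator app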
00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.993 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 81385 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 81385 ']' 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
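[Note] The block that follows loads the generated DHCHAP secrets into the target's keyring: each keyring_file_add_key <name> <path> call registers one secret file, with keyN used as the host key for a subsystem/host pairing and ckeyN as the optional controller (bidirectional) key. The files hold secrets in the standard DHHC-1 encoding, DHHC-1:<hh>:<base64 payload>:, where <hh> identifies the optional hash transform (00 none, 01/02/03 for SHA-256/384/512), which is why the temp-file names carry the hash in their suffix. The first few registrations, condensed from the trace below:

  scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.kTO
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eVb
  scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-sha256.HFJ
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B3h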
00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.251 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.818 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.818 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:06.818 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 null0 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kTO 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.eVb ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eVb 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.HFJ 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.B3h ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.B3h 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.819 17:14:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Bcd 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.BHX ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.BHX 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8Yv 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:06.819 17:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.197 nvme0n1 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.197 { 00:15:08.197 "auth": { 00:15:08.197 "dhgroup": "ffdhe8192", 00:15:08.197 "digest": "sha512", 00:15:08.197 "state": "completed" 00:15:08.197 }, 00:15:08.197 "cntlid": 1, 00:15:08.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:08.197 "listen_address": { 00:15:08.197 "adrfam": "IPv4", 00:15:08.197 "traddr": "10.0.0.3", 00:15:08.197 "trsvcid": "4420", 00:15:08.197 "trtype": "TCP" 00:15:08.197 }, 00:15:08.197 "peer_address": { 00:15:08.197 "adrfam": "IPv4", 00:15:08.197 "traddr": "10.0.0.1", 00:15:08.197 "trsvcid": "34378", 00:15:08.197 "trtype": "TCP" 00:15:08.197 }, 00:15:08.197 "qid": 0, 00:15:08.197 "state": "enabled", 00:15:08.197 "thread": "nvmf_tgt_poll_group_000" 00:15:08.197 } 00:15:08.197 ]' 00:15:08.197 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.456 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.714 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:15:08.714 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key3 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:09.651 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.910 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.169 2024/12/06 17:14:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:10.169 request: 00:15:10.169 { 00:15:10.169 "method": "bdev_nvme_attach_controller", 00:15:10.169 "params": { 00:15:10.169 "name": "nvme0", 00:15:10.169 "trtype": "tcp", 00:15:10.169 "traddr": "10.0.0.3", 00:15:10.169 "adrfam": "ipv4", 00:15:10.169 "trsvcid": "4420", 00:15:10.169 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:10.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:10.169 "prchk_reftag": false, 00:15:10.169 "prchk_guard": false, 00:15:10.169 "hdgst": false, 00:15:10.169 "ddgst": false, 00:15:10.169 "dhchap_key": "key3", 00:15:10.169 "allow_unrecognized_csi": false 00:15:10.169 } 00:15:10.169 } 00:15:10.169 Got JSON-RPC error response 00:15:10.169 GoRPCClient: error on JSON-RPC call 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:10.169 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.736 17:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:10.995 2024/12/06 17:14:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:10.995 request: 00:15:10.995 { 00:15:10.995 "method": "bdev_nvme_attach_controller", 00:15:10.995 "params": { 00:15:10.995 "name": "nvme0", 00:15:10.995 "trtype": "tcp", 00:15:10.995 "traddr": "10.0.0.3", 00:15:10.995 "adrfam": "ipv4", 00:15:10.995 "trsvcid": "4420", 00:15:10.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:10.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:10.995 "prchk_reftag": false, 00:15:10.995 "prchk_guard": false, 00:15:10.995 "hdgst": false, 00:15:10.995 "ddgst": false, 00:15:10.995 "dhchap_key": "key3", 00:15:10.995 "allow_unrecognized_csi": false 00:15:10.995 } 00:15:10.995 } 00:15:10.995 Got JSON-RPC error response 00:15:10.995 GoRPCClient: error on JSON-RPC call 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:10.995 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.254 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:11.822 2024/12/06 17:14:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:11.822 request: 00:15:11.822 { 00:15:11.822 "method": "bdev_nvme_attach_controller", 00:15:11.822 "params": { 00:15:11.822 "name": "nvme0", 00:15:11.822 "trtype": "tcp", 00:15:11.822 "traddr": "10.0.0.3", 00:15:11.822 "adrfam": "ipv4", 00:15:11.822 "trsvcid": "4420", 00:15:11.822 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:11.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:11.822 "prchk_reftag": false, 00:15:11.822 "prchk_guard": false, 00:15:11.822 "hdgst": false, 00:15:11.822 "ddgst": false, 00:15:11.822 "dhchap_key": "key0", 00:15:11.822 "dhchap_ctrlr_key": "key1", 00:15:11.822 "allow_unrecognized_csi": false 00:15:11.822 } 00:15:11.822 } 00:15:11.822 Got JSON-RPC error response 00:15:11.822 GoRPCClient: error on JSON-RPC call 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:11.822 17:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:12.081 nvme0n1 00:15:12.340 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:12.340 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:12.340 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.598 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.599 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.599 17:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:12.857 17:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:13.790 nvme0n1 00:15:13.790 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:13.790 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.790 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.353 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:14.620 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.620 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:15:14.620 17:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -l 0 --dhchap-secret DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: --dhchap-ctrl-secret DHHC-1:03:ZjNmMTIyMDM4MmJiOTZlYmRiNzliYjliODY2NzFiMjk4NDQwZThjZDRiMTE2YjQ1ODA0MjM3MGIzYjdkY2Q1NYuGR2M=: 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.208 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:15.507 17:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:16.441 2024/12/06 17:14:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:15:16.441 request: 00:15:16.441 { 00:15:16.441 "method": "bdev_nvme_attach_controller", 00:15:16.441 "params": { 00:15:16.441 "name": "nvme0", 00:15:16.441 "trtype": "tcp", 00:15:16.441 "traddr": "10.0.0.3", 00:15:16.441 "adrfam": "ipv4", 
00:15:16.441 "trsvcid": "4420", 00:15:16.441 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:16.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56", 00:15:16.441 "prchk_reftag": false, 00:15:16.441 "prchk_guard": false, 00:15:16.441 "hdgst": false, 00:15:16.441 "ddgst": false, 00:15:16.441 "dhchap_key": "key1", 00:15:16.441 "allow_unrecognized_csi": false 00:15:16.441 } 00:15:16.441 } 00:15:16.441 Got JSON-RPC error response 00:15:16.441 GoRPCClient: error on JSON-RPC call 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:16.441 17:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:17.372 nvme0n1 00:15:17.372 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:17.372 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.372 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:17.630 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.630 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.630 17:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:17.888 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:18.455 nvme0n1 00:15:18.455 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:18.455 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:18.455 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.714 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.714 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.714 17:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: '' 2s 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: ]] 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MGFiMzU4MTg3NjBhNTE3NGM2MTNiMzk5ZWZhNzE2MzNZg7y6: 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:18.973 17:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:20.873 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:15:20.873 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:20.873 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:20.873 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:20.873 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:20.873 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: 2s 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: ]] 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:M2U5MjU3YmM1ZTJhZDBmZGZjMTgwN2IwN2RlMzhiNDUzMGViNjMwYTVjNWE1MTJmiWIRoQ==: 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:21.130 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:23.050 17:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:24.049 nvme0n1 00:15:24.049 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.049 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.049 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.049 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.049 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.049 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.983 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:24.983 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:24.983 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:25.241 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:25.500 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:25.500 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:25.500 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.758 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:25.759 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 
--dhchap-ctrlr-key key3 00:15:26.694 2024/12/06 17:15:08 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:26.694 request: 00:15:26.694 { 00:15:26.694 "method": "bdev_nvme_set_keys", 00:15:26.694 "params": { 00:15:26.694 "name": "nvme0", 00:15:26.694 "dhchap_key": "key1", 00:15:26.694 "dhchap_ctrlr_key": "key3" 00:15:26.694 } 00:15:26.694 } 00:15:26.694 Got JSON-RPC error response 00:15:26.694 GoRPCClient: error on JSON-RPC call 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:26.694 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:28.067 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:28.067 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:28.067 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:28.067 17:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:28.999 nvme0n1 00:15:28.999 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:28.999 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.999 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.256 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:29.821 2024/12/06 17:15:12 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:15:29.821 request: 00:15:29.821 { 00:15:29.821 "method": "bdev_nvme_set_keys", 00:15:29.821 "params": { 00:15:29.821 "name": "nvme0", 00:15:29.821 "dhchap_key": "key2", 00:15:29.821 "dhchap_ctrlr_key": "key0" 00:15:29.821 } 00:15:29.821 } 00:15:29.821 Got JSON-RPC error response 00:15:29.821 GoRPCClient: error on JSON-RPC call 00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 
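The trace above walks the DH-HMAC-CHAP re-key path end to end: the target is told to accept a new key pair with nvmf_subsystem_set_keys, the live host controller is re-authenticated with bdev_nvme_set_keys, and bdev_nvme_get_controllers confirms the session survived. Condensed into plain commands, using only the RPCs visible in this log (the NQNs and socket paths are taken from the run; key2/key3 are assumed to have been registered with both sides earlier in the test), the happy path is roughly:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56

  # 1. Target side first: allow the new key pair for this host
  #    (target RPCs go to the default application socket here).
  $rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # 2. Host side: re-authenticate the existing controller with the same pair
  #    (the host SPDK instance listens on /var/tmp/host.sock, as in the trace).
  $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
      --dhchap-key key2 --dhchap-ctrlr-key key3

  # 3. Confirm the controller is still attached after the rotation.
  $rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

The NOT-wrapped bdev_nvme_set_keys calls in the trace deliberately pick key pairs the target no longer accepts and expect exactly the Code=-13 Msg=Permission denied JSON-RPC responses captured above; the NOT helper treats that failure (es=1) as success, which is the pass condition.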
00:15:29.821 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.078 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:30.078 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 76381 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 76381 ']' 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 76381 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76381 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:31.452 killing process with pid 76381 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76381' 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 76381 00:15:31.452 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 76381 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:31.710 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:31.710 rmmod nvme_tcp 00:15:31.710 rmmod nvme_fabrics 00:15:31.969 rmmod nvme_keyring 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@128 -- # set -e 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 81385 ']' 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 81385 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 81385 ']' 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 81385 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81385 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:31.969 killing process with pid 81385 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81385' 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 81385 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 81385 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:31.969 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:32.228 17:15:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kTO /tmp/spdk.key-sha256.HFJ /tmp/spdk.key-sha384.Bcd /tmp/spdk.key-sha512.8Yv /tmp/spdk.key-sha512.eVb /tmp/spdk.key-sha384.B3h /tmp/spdk.key-sha256.BHX '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:15:32.228 00:15:32.228 real 3m26.855s 00:15:32.228 user 8m24.405s 00:15:32.228 sys 0m24.197s 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.228 ************************************ 00:15:32.228 END TEST nvmf_auth_target 00:15:32.228 ************************************ 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.228 ************************************ 00:15:32.228 START TEST nvmf_bdevio_no_huge 00:15:32.228 ************************************ 00:15:32.228 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:32.489 * Looking for test storage... 
00:15:32.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.489 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:32.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.490 --rc genhtml_branch_coverage=1 00:15:32.490 --rc genhtml_function_coverage=1 00:15:32.490 --rc genhtml_legend=1 00:15:32.490 --rc geninfo_all_blocks=1 00:15:32.490 --rc geninfo_unexecuted_blocks=1 00:15:32.490 00:15:32.490 ' 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:32.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.490 --rc genhtml_branch_coverage=1 00:15:32.490 --rc genhtml_function_coverage=1 00:15:32.490 --rc genhtml_legend=1 00:15:32.490 --rc geninfo_all_blocks=1 00:15:32.490 --rc geninfo_unexecuted_blocks=1 00:15:32.490 00:15:32.490 ' 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:32.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.490 --rc genhtml_branch_coverage=1 00:15:32.490 --rc genhtml_function_coverage=1 00:15:32.490 --rc genhtml_legend=1 00:15:32.490 --rc geninfo_all_blocks=1 00:15:32.490 --rc geninfo_unexecuted_blocks=1 00:15:32.490 00:15:32.490 ' 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:32.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.490 --rc genhtml_branch_coverage=1 00:15:32.490 --rc genhtml_function_coverage=1 00:15:32.490 --rc genhtml_legend=1 00:15:32.490 --rc geninfo_all_blocks=1 00:15:32.490 --rc geninfo_unexecuted_blocks=1 00:15:32.490 00:15:32.490 ' 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:32.490 
17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.490 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:32.491 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.491 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:32.491 
17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:32.492 Cannot find device "nvmf_init_br" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:32.492 Cannot find device "nvmf_init_br2" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:32.492 Cannot find device "nvmf_tgt_br" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:32.492 Cannot find device "nvmf_tgt_br2" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:32.492 Cannot find device "nvmf_init_br" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:32.492 Cannot find device "nvmf_init_br2" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:32.492 Cannot find device "nvmf_tgt_br" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:32.492 Cannot find device "nvmf_tgt_br2" 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:15:32.492 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:32.751 Cannot find device "nvmf_br" 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:32.751 Cannot find device "nvmf_init_if" 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:32.751 Cannot find device "nvmf_init_if2" 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:15:32.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:32.751 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:32.751 17:15:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.751 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.751 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.751 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:32.751 17:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:32.751 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.751 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:33.010 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:33.010 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:15:33.010 00:15:33.010 --- 10.0.0.3 ping statistics --- 00:15:33.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.010 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:33.010 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:33.010 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:15:33.010 00:15:33.010 --- 10.0.0.4 ping statistics --- 00:15:33.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.010 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:33.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:33.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:33.010 00:15:33.010 --- 10.0.0.1 ping statistics --- 00:15:33.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.010 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:33.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:33.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:15:33.010 00:15:33.010 --- 10.0.0.2 ping statistics --- 00:15:33.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.010 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=82264 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 82264 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 82264 ']' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.010 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.010 [2024-12-06 17:15:15.227426] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:15:33.010 [2024-12-06 17:15:15.227528] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:33.268 [2024-12-06 17:15:15.381504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.268 [2024-12-06 17:15:15.461326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.268 [2024-12-06 17:15:15.461390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.268 [2024-12-06 17:15:15.461402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.268 [2024-12-06 17:15:15.461411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.268 [2024-12-06 17:15:15.461418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.268 [2024-12-06 17:15:15.462086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:33.268 [2024-12-06 17:15:15.462146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:33.268 [2024-12-06 17:15:15.462215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:33.268 [2024-12-06 17:15:15.462222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 [2024-12-06 17:15:15.642070] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 Malloc0 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:33.571 [2024-12-06 17:15:15.682316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:33.571 { 00:15:33.571 "params": { 00:15:33.571 "name": "Nvme$subsystem", 00:15:33.571 "trtype": "$TEST_TRANSPORT", 00:15:33.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.571 "adrfam": "ipv4", 00:15:33.571 "trsvcid": "$NVMF_PORT", 00:15:33.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.571 "hdgst": ${hdgst:-false}, 00:15:33.571 "ddgst": ${ddgst:-false} 00:15:33.571 }, 00:15:33.571 "method": "bdev_nvme_attach_controller" 00:15:33.571 } 00:15:33.571 EOF 00:15:33.571 )") 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
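A note on the mechanics at target/bdevio.sh@24 above: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem argument (joined with IFS=','), pretty-prints the result through jq, and bdevio consumes it over /dev/fd/62, which is how bash spells process substitution here, so the config never touches disk. A stripped-down sketch of the same pattern, with the stanza hard-coded to the values this run resolves to (the full helper presumably splices the stanza into a larger bdev-subsystem envelope; only the stanza itself is visible in this trace):

gen_json() {
  jq . <<'JSON'
{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
JSON
}
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(gen_json) --no-huge -s 1024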
00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:33.571 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:33.571 "params": { 00:15:33.571 "name": "Nvme1", 00:15:33.571 "trtype": "tcp", 00:15:33.571 "traddr": "10.0.0.3", 00:15:33.571 "adrfam": "ipv4", 00:15:33.571 "trsvcid": "4420", 00:15:33.571 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.571 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.571 "hdgst": false, 00:15:33.571 "ddgst": false 00:15:33.571 }, 00:15:33.571 "method": "bdev_nvme_attach_controller" 00:15:33.571 }' 00:15:33.571 [2024-12-06 17:15:15.742789] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:15:33.571 [2024-12-06 17:15:15.742882] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82301 ] 00:15:33.842 [2024-12-06 17:15:15.892023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:33.842 [2024-12-06 17:15:15.958120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.842 [2024-12-06 17:15:15.958113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.842 [2024-12-06 17:15:15.958038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.100 I/O targets: 00:15:34.100 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:34.100 00:15:34.100 00:15:34.100 CUnit - A unit testing framework for C - Version 2.1-3 00:15:34.100 http://cunit.sourceforge.net/ 00:15:34.100 00:15:34.100 00:15:34.100 Suite: bdevio tests on: Nvme1n1 00:15:34.100 Test: blockdev write read block ...passed 00:15:34.100 Test: blockdev write zeroes read block ...passed 00:15:34.100 Test: blockdev write zeroes read no split ...passed 00:15:34.100 Test: blockdev write zeroes read split ...passed 00:15:34.100 Test: blockdev write zeroes read split partial ...passed 00:15:34.100 Test: blockdev reset ...[2024-12-06 17:15:16.306315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:34.100 [2024-12-06 17:15:16.306449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2432eb0 (9): Bad file descriptor 00:15:34.100 [2024-12-06 17:15:16.324779] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:15:34.100 passed 00:15:34.100 Test: blockdev write read 8 blocks ...passed 00:15:34.100 Test: blockdev write read size > 128k ...passed 00:15:34.100 Test: blockdev write read invalid size ...passed 00:15:34.100 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:34.100 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:34.100 Test: blockdev write read max offset ...passed 00:15:34.359 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:34.359 Test: blockdev writev readv 8 blocks ...passed 00:15:34.359 Test: blockdev writev readv 30 x 1block ...passed 00:15:34.359 Test: blockdev writev readv block ...passed 00:15:34.359 Test: blockdev writev readv size > 128k ...passed 00:15:34.359 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:34.359 Test: blockdev comparev and writev ...[2024-12-06 17:15:16.499973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.500034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.500056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.500067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.500677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.500706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.500725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.500736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.501188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.501224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.501246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.501262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.501763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.501792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.501810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:34.359 [2024-12-06 17:15:16.501820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:34.359 passed 00:15:34.359 Test: blockdev nvme passthru rw ...passed 00:15:34.359 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:15:16.583805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.359 [2024-12-06 17:15:16.583873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.584007] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.359 [2024-12-06 17:15:16.584034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.584147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.359 [2024-12-06 17:15:16.584168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:34.359 [2024-12-06 17:15:16.584303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:34.359 [2024-12-06 17:15:16.584329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:34.359 passed 00:15:34.359 Test: blockdev nvme admin passthru ...passed 00:15:34.359 Test: blockdev copy ...passed 00:15:34.359 00:15:34.359 Run Summary: Type Total Ran Passed Failed Inactive 00:15:34.359 suites 1 1 n/a 0 0 00:15:34.359 tests 23 23 23 0 0 00:15:34.359 asserts 152 152 152 0 n/a 00:15:34.359 00:15:34.359 Elapsed time = 0.947 seconds 00:15:34.926 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:34.926 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.926 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:34.926 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:34.927 rmmod nvme_tcp 00:15:34.927 rmmod nvme_fabrics 00:15:34.927 rmmod nvme_keyring 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 82264 ']' 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 82264 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 82264 ']' 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 82264 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82264 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:34.927 killing process with pid 82264 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82264' 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 82264 00:15:34.927 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 82264 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:35.493 17:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.493 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.494 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:35.494 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.494 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.494 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.494 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:15:35.494 00:15:35.494 real 0m3.297s 00:15:35.494 user 0m10.329s 00:15:35.494 sys 0m1.377s 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:35.752 ************************************ 00:15:35.752 END TEST nvmf_bdevio_no_huge 00:15:35.752 ************************************ 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.752 ************************************ 00:15:35.752 START TEST nvmf_tls 00:15:35.752 ************************************ 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:35.752 * Looking for test storage... 
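One housekeeping pattern worth calling out before the next test repeats the whole network setup: every iptables rule the harness inserts is tagged with a comment, so teardown never has to enumerate rules. From the ipts calls in the setup and the iptr call at nvmf/common.sh@297 in the cleanup just above, the pair behaves like this sketch:

ipts() {
  # insert the rule, tagged with its own arguments for later identification
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
iptr() {
  # strip every tagged rule in one pass, leaving unrelated rules untouched
  iptables-save | grep -v SPDK_NVMF | iptables-restore
}

ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # setup
iptr                                                            # teardown

The payoff is visible in the cleanup trace: one save/grep/restore pipeline removes all three rules added for this test without disturbing anything else on the box.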
00:15:35.752 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:15:35.752 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:35.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.752 --rc genhtml_branch_coverage=1 00:15:35.752 --rc genhtml_function_coverage=1 00:15:35.752 --rc genhtml_legend=1 00:15:35.752 --rc geninfo_all_blocks=1 00:15:35.752 --rc geninfo_unexecuted_blocks=1 00:15:35.752 00:15:35.752 ' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:35.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.752 --rc genhtml_branch_coverage=1 00:15:35.752 --rc genhtml_function_coverage=1 00:15:35.752 --rc genhtml_legend=1 00:15:35.752 --rc geninfo_all_blocks=1 00:15:35.752 --rc geninfo_unexecuted_blocks=1 00:15:35.752 00:15:35.752 ' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:35.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.752 --rc genhtml_branch_coverage=1 00:15:35.752 --rc genhtml_function_coverage=1 00:15:35.752 --rc genhtml_legend=1 00:15:35.752 --rc geninfo_all_blocks=1 00:15:35.752 --rc geninfo_unexecuted_blocks=1 00:15:35.752 00:15:35.752 ' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:35.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.752 --rc genhtml_branch_coverage=1 00:15:35.752 --rc genhtml_function_coverage=1 00:15:35.752 --rc genhtml_legend=1 00:15:35.752 --rc geninfo_all_blocks=1 00:15:35.752 --rc geninfo_unexecuted_blocks=1 00:15:35.752 00:15:35.752 ' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:35.752 17:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.752 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.753 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:36.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:36.011 
17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:36.011 Cannot find device "nvmf_init_br" 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:15:36.011 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:36.012 Cannot find device "nvmf_init_br2" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:36.012 Cannot find device "nvmf_tgt_br" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.012 Cannot find device "nvmf_tgt_br2" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:36.012 Cannot find device "nvmf_init_br" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:36.012 Cannot find device "nvmf_init_br2" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:36.012 Cannot find device "nvmf_tgt_br" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:36.012 Cannot find device "nvmf_tgt_br2" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:36.012 Cannot find device "nvmf_br" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:36.012 Cannot find device "nvmf_init_if" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:36.012 Cannot find device "nvmf_init_if2" 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.012 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:36.012 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:36.271 17:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:36.271 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.271 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:36.271 00:15:36.271 --- 10.0.0.3 ping statistics --- 00:15:36.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.271 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:36.271 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:36.271 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:15:36.271 00:15:36.271 --- 10.0.0.4 ping statistics --- 00:15:36.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.271 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:15:36.271 00:15:36.271 --- 10.0.0.1 ping statistics --- 00:15:36.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.271 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:36.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:36.271 00:15:36.271 --- 10.0.0.2 ping statistics --- 00:15:36.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.271 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82542 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:36.271 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82542 00:15:36.272 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82542 ']' 00:15:36.272 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.272 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.272 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.272 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.272 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.272 [2024-12-06 17:15:18.559764] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
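The nvmfappstart above differs from the earlier bdevio one in a key way: --wait-for-rpc holds the target before subsystem initialization, which is what lets the test select and tune the ssl socket implementation before any listener exists. The RPC sequence the trace drives over the following steps condenses to the sketch below. The format_interchange_psk body is a reconstruction and should be read as an assumption: the trace shows only "python -" plus its inputs and output, and the shape (base64 of the raw key bytes plus a little-endian CRC-32, framed with the digest byte, 01 here) is inferred from the generated key strings:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init            # the target finishes booting only now
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

format_interchange_psk() {
  # assumed shape of the elided inline python; output matches the keys below
  python - "$1" "$2" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
}
key_path=$(mktemp) && chmod 0600 "$key_path"
format_interchange_psk 00112233445566778899aabbccddeeff 1 > "$key_path"
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

Note the -k on nvmf_subsystem_add_listener: that is what marks the listener as TLS-enabled, and the target duly logs "TLS support is considered experimental" when it takes effect below.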
00:15:36.272 [2024-12-06 17:15:18.559910] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.550 [2024-12-06 17:15:18.723495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.550 [2024-12-06 17:15:18.764673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.550 [2024-12-06 17:15:18.764725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.550 [2024-12-06 17:15:18.764738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.550 [2024-12-06 17:15:18.764748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.550 [2024-12-06 17:15:18.764756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.550 [2024-12-06 17:15:18.765094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.550 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.550 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:36.550 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.550 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.550 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.808 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.808 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:36.808 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:37.065 true 00:15:37.065 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:37.065 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:37.323 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:37.323 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:37.323 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:37.891 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:37.891 17:15:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:38.150 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:38.150 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:38.150 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:38.409 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:15:38.409 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:38.978 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:38.978 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:38.978 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:38.978 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:38.978 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:38.978 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:38.979 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:15:39.546 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:39.546 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:39.804 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:39.804 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:39.804 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:40.061 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:40.061 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.1gS3bvzoQX 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.SlYDaBArZ4 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.1gS3bvzoQX 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.SlYDaBArZ4 00:15:40.317 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:40.909 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:41.167 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.1gS3bvzoQX 00:15:41.167 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.1gS3bvzoQX 00:15:41.167 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:41.423 [2024-12-06 17:15:23.470344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:41.423 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:41.681 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:41.938 [2024-12-06 17:15:24.062457] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:41.938 [2024-12-06 17:15:24.062706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:41.938 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:42.196 malloc0 00:15:42.196 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:42.453 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.1gS3bvzoQX 00:15:42.711 17:15:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:42.969 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.1gS3bvzoQX 00:15:55.168 Initializing NVMe Controllers 00:15:55.168 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:15:55.168 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:55.168 Initialization complete. Launching workers. 00:15:55.168 ======================================================== 00:15:55.168 Latency(us) 00:15:55.168 Device Information : IOPS MiB/s Average min max 00:15:55.168 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9163.58 35.80 6985.81 1406.43 14033.18 00:15:55.168 ======================================================== 00:15:55.168 Total : 9163.58 35.80 6985.81 1406.43 14033.18 00:15:55.168 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.1gS3bvzoQX 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1gS3bvzoQX 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82906 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82906 /var/tmp/bdevperf.sock 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82906 ']' 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.168 17:15:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:55.168 [2024-12-06 17:15:35.469152] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
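The target-side TLS setup that produced the perf numbers above reduces to a short RPC sequence; a consolidated sketch follows (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the /tmp key file name is whatever mktemp returned for this particular run):

  rpc.py sock_impl_set_options -i ssl --tls-version 13
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.1gS3bvzoQX            # 0600 file holding the interchange PSK
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

spdk_nvme_perf then connects as an initiator with -S ssl and --psk-path pointing at the same key file; the run_bdevperf case starting here exercises the same TLS path through the bdev layer instead.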
00:15:55.169 [2024-12-06 17:15:35.469276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82906 ] 00:15:55.169 [2024-12-06 17:15:35.662042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.169 [2024-12-06 17:15:35.711246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.169 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.169 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:55.169 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1gS3bvzoQX 00:15:55.169 17:15:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:55.169 [2024-12-06 17:15:37.121584] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:55.169 TLSTESTn1 00:15:55.169 17:15:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:55.169 Running I/O for 10 seconds... 00:15:57.476 3857.00 IOPS, 15.07 MiB/s [2024-12-06T17:15:40.706Z] 3881.50 IOPS, 15.16 MiB/s [2024-12-06T17:15:41.638Z] 3917.33 IOPS, 15.30 MiB/s [2024-12-06T17:15:42.708Z] 3946.50 IOPS, 15.42 MiB/s [2024-12-06T17:15:43.645Z] 3943.20 IOPS, 15.40 MiB/s [2024-12-06T17:15:44.580Z] 3938.33 IOPS, 15.38 MiB/s [2024-12-06T17:15:45.515Z] 3938.00 IOPS, 15.38 MiB/s [2024-12-06T17:15:46.448Z] 3940.50 IOPS, 15.39 MiB/s [2024-12-06T17:15:47.381Z] 3939.22 IOPS, 15.39 MiB/s [2024-12-06T17:15:47.381Z] 3933.80 IOPS, 15.37 MiB/s 00:16:05.083 Latency(us) 00:16:05.083 [2024-12-06T17:15:47.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.083 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:05.083 Verification LBA range: start 0x0 length 0x2000 00:16:05.083 TLSTESTn1 : 10.02 3940.18 15.39 0.00 0.00 32426.59 5808.87 33363.78 00:16:05.083 [2024-12-06T17:15:47.381Z] =================================================================================================================== 00:16:05.083 [2024-12-06T17:15:47.382Z] Total : 3940.18 15.39 0.00 0.00 32426.59 5808.87 33363.78 00:16:05.084 { 00:16:05.084 "results": [ 00:16:05.084 { 00:16:05.084 "job": "TLSTESTn1", 00:16:05.084 "core_mask": "0x4", 00:16:05.084 "workload": "verify", 00:16:05.084 "status": "finished", 00:16:05.084 "verify_range": { 00:16:05.084 "start": 0, 00:16:05.084 "length": 8192 00:16:05.084 }, 00:16:05.084 "queue_depth": 128, 00:16:05.084 "io_size": 4096, 00:16:05.084 "runtime": 10.015532, 00:16:05.084 "iops": 3940.180112249654, 00:16:05.084 "mibps": 15.391328563475211, 00:16:05.084 "io_failed": 0, 00:16:05.084 "io_timeout": 0, 00:16:05.084 "avg_latency_us": 32426.594357246024, 00:16:05.084 "min_latency_us": 5808.872727272727, 00:16:05.084 "max_latency_us": 33363.781818181815 00:16:05.084 } 00:16:05.084 ], 00:16:05.084 "core_count": 1 00:16:05.084 } 00:16:05.342 17:15:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82906 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82906 ']' 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82906 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82906 00:16:05.342 killing process with pid 82906 00:16:05.342 Received shutdown signal, test time was about 10.000000 seconds 00:16:05.342 00:16:05.342 Latency(us) 00:16:05.342 [2024-12-06T17:15:47.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.342 [2024-12-06T17:15:47.640Z] =================================================================================================================== 00:16:05.342 [2024-12-06T17:15:47.640Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82906' 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82906 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82906 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlYDaBArZ4 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlYDaBArZ4 00:16:05.342 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlYDaBArZ4 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SlYDaBArZ4 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83065 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83065 /var/tmp/bdevperf.sock 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83065 ']' 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.343 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:05.343 [2024-12-06 17:15:47.635125] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:05.343 [2024-12-06 17:15:47.635748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83065 ] 00:16:05.600 [2024-12-06 17:15:47.788525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.600 [2024-12-06 17:15:47.844372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.858 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.858 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:05.858 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SlYDaBArZ4 00:16:06.115 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:06.372 [2024-12-06 17:15:48.629712] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:06.372 [2024-12-06 17:15:48.640066] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:06.372 [2024-12-06 17:15:48.640360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb65620 (107): Transport endpoint is not connected 00:16:06.372 [2024-12-06 17:15:48.641349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb65620 (9): Bad file descriptor 00:16:06.372 [2024-12-06 
17:15:48.642346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:06.372 [2024-12-06 17:15:48.642375] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:06.372 [2024-12-06 17:15:48.642386] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:06.372 [2024-12-06 17:15:48.642403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:16:06.372 2024/12/06 17:15:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:06.372 request: 00:16:06.372 { 00:16:06.372 "method": "bdev_nvme_attach_controller", 00:16:06.372 "params": { 00:16:06.372 "name": "TLSTEST", 00:16:06.372 "trtype": "tcp", 00:16:06.372 "traddr": "10.0.0.3", 00:16:06.372 "adrfam": "ipv4", 00:16:06.372 "trsvcid": "4420", 00:16:06.372 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:06.372 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:06.372 "prchk_reftag": false, 00:16:06.372 "prchk_guard": false, 00:16:06.372 "hdgst": false, 00:16:06.372 "ddgst": false, 00:16:06.372 "psk": "key0", 00:16:06.372 "allow_unrecognized_csi": false 00:16:06.372 } 00:16:06.372 } 00:16:06.372 Got JSON-RPC error response 00:16:06.372 GoRPCClient: error on JSON-RPC call 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83065 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83065 ']' 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83065 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83065 00:16:06.630 killing process with pid 83065 00:16:06.630 Received shutdown signal, test time was about 10.000000 seconds 00:16:06.630 00:16:06.630 Latency(us) 00:16:06.630 [2024-12-06T17:15:48.928Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.630 [2024-12-06T17:15:48.928Z] =================================================================================================================== 00:16:06.630 [2024-12-06T17:15:48.928Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83065' 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83065 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83065 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1gS3bvzoQX 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1gS3bvzoQX 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:06.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.1gS3bvzoQX 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1gS3bvzoQX 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83104 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83104 /var/tmp/bdevperf.sock 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83104 ']' 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
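The NOT run_bdevperf cases (tls.sh@147 through tls.sh@156) share one shape: start bdevperf idle (-z) on its own RPC socket, load some PSK into its keyring, and expect bdev_nvme_attach_controller to fail, which run_bdevperf surfaces as return 1 and the NOT() wrapper counts as a pass. A sketch of the mismatched-key case at tls.sh@147 above, using the second key file this run generated:

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SlYDaBArZ4   # PSK never registered on the target
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0                                      # expected to fail during the TLS handshake

The attach errors out before controller init completes, so no bdev is created and the nonzero exit is exactly what the test wants.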
00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.630 17:15:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:06.630 [2024-12-06 17:15:48.880238] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:06.630 [2024-12-06 17:15:48.880350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83104 ] 00:16:06.888 [2024-12-06 17:15:49.022178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.888 [2024-12-06 17:15:49.055175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.888 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.888 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:06.888 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1gS3bvzoQX 00:16:07.452 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:07.711 [2024-12-06 17:15:49.800276] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:07.711 [2024-12-06 17:15:49.809946] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:07.711 [2024-12-06 17:15:49.809994] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:07.711 [2024-12-06 17:15:49.810051] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:07.711 [2024-12-06 17:15:49.810979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277620 (107): Transport endpoint is not connected 00:16:07.711 [2024-12-06 17:15:49.811966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1277620 (9): Bad file descriptor 00:16:07.711 [2024-12-06 17:15:49.812962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:07.711 [2024-12-06 17:15:49.812987] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:07.711 [2024-12-06 17:15:49.812999] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:07.711 [2024-12-06 17:15:49.813015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
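The identity string in the tcp_sock_get_key error above is the TLS PSK identity the initiator presented: NVMe0R01 followed by the host NQN and the subsystem NQN. The target resolves that identity against the host-to-PSK mappings created with nvmf_subsystem_add_host; host2 was never mapped, so the server-side PSK lookup comes back empty, the handshake is refused, and the JSON-RPC error that follows is the client-visible result. A host2 connection would only work if the target carried its own mapping, sketched here with a hypothetical key file:

  rpc.py keyring_file_add_key key_host2 /tmp/host2.psk    # hypothetical: a PSK file provisioned for host2
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk key_host2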
00:16:07.711 2024/12/06 17:15:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:07.711 request: 00:16:07.711 { 00:16:07.711 "method": "bdev_nvme_attach_controller", 00:16:07.711 "params": { 00:16:07.711 "name": "TLSTEST", 00:16:07.711 "trtype": "tcp", 00:16:07.711 "traddr": "10.0.0.3", 00:16:07.711 "adrfam": "ipv4", 00:16:07.711 "trsvcid": "4420", 00:16:07.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.711 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:07.711 "prchk_reftag": false, 00:16:07.711 "prchk_guard": false, 00:16:07.711 "hdgst": false, 00:16:07.711 "ddgst": false, 00:16:07.711 "psk": "key0", 00:16:07.711 "allow_unrecognized_csi": false 00:16:07.711 } 00:16:07.711 } 00:16:07.711 Got JSON-RPC error response 00:16:07.711 GoRPCClient: error on JSON-RPC call 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83104 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83104 ']' 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83104 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83104 00:16:07.711 killing process with pid 83104 00:16:07.711 Received shutdown signal, test time was about 10.000000 seconds 00:16:07.711 00:16:07.711 Latency(us) 00:16:07.711 [2024-12-06T17:15:50.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.711 [2024-12-06T17:15:50.009Z] =================================================================================================================== 00:16:07.711 [2024-12-06T17:15:50.009Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83104' 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83104 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83104 00:16:07.711 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.712 17:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1gS3bvzoQX 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1gS3bvzoQX 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.1gS3bvzoQX 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:07.712 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:07.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.1gS3bvzoQX 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83143 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83143 /var/tmp/bdevperf.sock 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83143 ']' 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.712 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:07.970 [2024-12-06 17:15:50.061199] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:16:07.970 [2024-12-06 17:15:50.061312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83143 ] 00:16:07.970 [2024-12-06 17:15:50.205550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.970 [2024-12-06 17:15:50.257352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.228 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.228 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:08.228 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.1gS3bvzoQX 00:16:08.491 17:15:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:08.756 [2024-12-06 17:15:51.008096] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:08.756 [2024-12-06 17:15:51.012988] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:08.756 [2024-12-06 17:15:51.013048] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:08.756 [2024-12-06 17:15:51.013119] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:08.756 [2024-12-06 17:15:51.013698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570620 (107): Transport endpoint is not connected 00:16:08.756 [2024-12-06 17:15:51.014683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570620 (9): Bad file descriptor 00:16:08.756 [2024-12-06 17:15:51.015679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:08.756 [2024-12-06 17:15:51.015702] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:16:08.756 [2024-12-06 17:15:51.015715] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:08.756 [2024-12-06 17:15:51.015730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
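The tls.sh@153 case is the mirror image: the host NQN is mapped, but the subsystem NQN baked into the identity (cnode2) does not exist on the target, so the same server-side lookup fails on the subsystem half instead. What actually exists on the target can be listed over its RPC socket, e.g.:

  rpc.py nvmf_get_subsystems | jq -r '.[].nqn'    # expect cnode1 (plus the default discovery subsystem)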
00:16:08.756 2024/12/06 17:15:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:16:08.756 request: 00:16:08.756 { 00:16:08.756 "method": "bdev_nvme_attach_controller", 00:16:08.756 "params": { 00:16:08.756 "name": "TLSTEST", 00:16:08.756 "trtype": "tcp", 00:16:08.756 "traddr": "10.0.0.3", 00:16:08.756 "adrfam": "ipv4", 00:16:08.756 "trsvcid": "4420", 00:16:08.756 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:08.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:08.756 "prchk_reftag": false, 00:16:08.756 "prchk_guard": false, 00:16:08.756 "hdgst": false, 00:16:08.756 "ddgst": false, 00:16:08.756 "psk": "key0", 00:16:08.756 "allow_unrecognized_csi": false 00:16:08.756 } 00:16:08.756 } 00:16:08.756 Got JSON-RPC error response 00:16:08.756 GoRPCClient: error on JSON-RPC call 00:16:08.756 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83143 00:16:08.756 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83143 ']' 00:16:08.756 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83143 00:16:08.756 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:08.756 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:08.756 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83143 00:16:09.015 killing process with pid 83143 00:16:09.015 Received shutdown signal, test time was about 10.000000 seconds 00:16:09.015 00:16:09.015 Latency(us) 00:16:09.015 [2024-12-06T17:15:51.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.015 [2024-12-06T17:15:51.313Z] =================================================================================================================== 00:16:09.015 [2024-12-06T17:15:51.313Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83143' 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83143 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83143 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:09.015 17:15:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83182 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83182 /var/tmp/bdevperf.sock 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83182 ']' 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.015 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:09.015 [2024-12-06 17:15:51.261818] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
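The tls.sh@156 case passes an empty string where the key path goes, and it trips earlier than the previous two: keyring_file_add_key validates the path before any connection is attempted, so the failure below comes from the keyring, not from a TLS handshake. The failing call reduces to:

  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''   # rejected: non-absolute paths are not allowed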
00:16:09.015 [2024-12-06 17:15:51.261924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83182 ] 00:16:09.273 [2024-12-06 17:15:51.404521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.273 [2024-12-06 17:15:51.437912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:09.273 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:09.273 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:09.273 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:09.531 [2024-12-06 17:15:51.794095] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:09.531 [2024-12-06 17:15:51.794149] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:09.531 2024/12/06 17:15:51 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:09.531 request: 00:16:09.531 { 00:16:09.531 "method": "keyring_file_add_key", 00:16:09.531 "params": { 00:16:09.531 "name": "key0", 00:16:09.531 "path": "" 00:16:09.531 } 00:16:09.531 } 00:16:09.531 Got JSON-RPC error response 00:16:09.531 GoRPCClient: error on JSON-RPC call 00:16:09.531 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:09.789 [2024-12-06 17:15:52.074278] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:09.789 [2024-12-06 17:15:52.074352] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:09.789 2024/12/06 17:15:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:09.789 request: 00:16:09.789 { 00:16:09.789 "method": "bdev_nvme_attach_controller", 00:16:09.789 "params": { 00:16:09.789 "name": "TLSTEST", 00:16:09.789 "trtype": "tcp", 00:16:09.789 "traddr": "10.0.0.3", 00:16:09.789 "adrfam": "ipv4", 00:16:09.789 "trsvcid": "4420", 00:16:09.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:09.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:09.789 "prchk_reftag": false, 00:16:09.790 "prchk_guard": false, 00:16:09.790 "hdgst": false, 00:16:09.790 "ddgst": false, 00:16:09.790 "psk": "key0", 00:16:09.790 "allow_unrecognized_csi": false 00:16:09.790 } 00:16:09.790 } 00:16:09.790 Got JSON-RPC error response 00:16:09.790 GoRPCClient: error on JSON-RPC call 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83182 00:16:10.049 17:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83182 ']' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83182 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83182 00:16:10.049 killing process with pid 83182 00:16:10.049 Received shutdown signal, test time was about 10.000000 seconds 00:16:10.049 00:16:10.049 Latency(us) 00:16:10.049 [2024-12-06T17:15:52.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:10.049 [2024-12-06T17:15:52.347Z] =================================================================================================================== 00:16:10.049 [2024-12-06T17:15:52.347Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83182' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83182 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83182 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 82542 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82542 ']' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82542 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82542 00:16:10.049 killing process with pid 82542 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82542' 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82542 00:16:10.049 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82542 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.88WQ6ZvsQP 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.88WQ6ZvsQP 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83237 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83237 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83237 ']' 00:16:10.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.307 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.307 [2024-12-06 17:15:52.550531] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
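format_interchange_psk wraps format_key, which assembles the NVMe TLS PSK interchange form: the NVMeTLSkey-1 prefix, a two-digit hash field (01 and 02 line up with the SHA-256 and SHA-384 retained-key variants), and a base64 payload. A minimal sketch of what the inline 'python -' step appears to compute, assuming the payload is the key material followed by a little-endian CRC32 trailer (the lengths and trailers of the keys printed in this run are consistent with that):

  # sketch of the body fed to the 'python -' step in nvmf/common.sh
  import base64, zlib
  key = b"00112233445566778899aabbccddeeff0011223344556677"
  crc = zlib.crc32(key).to_bytes(4, byteorder="little")
  print("NVMeTLSkey-1:{:02}:{}:".format(2, base64.b64encode(key + crc).decode()))

The 32-hex-digit keys earlier in the run used hash field 01; this 48-digit one uses 02 and is written to /tmp/tmp.88WQ6ZvsQP with mode 0600 like the others.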
00:16:10.307 [2024-12-06 17:15:52.550811] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.564 [2024-12-06 17:15:52.698745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.564 [2024-12-06 17:15:52.730748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.564 [2024-12-06 17:15:52.730808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.564 [2024-12-06 17:15:52.730819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.564 [2024-12-06 17:15:52.730828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.564 [2024-12-06 17:15:52.730835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.564 [2024-12-06 17:15:52.731147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.88WQ6ZvsQP 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.88WQ6ZvsQP 00:16:10.564 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:11.129 [2024-12-06 17:15:53.163560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:11.129 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:11.386 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:11.644 [2024-12-06 17:15:53.699672] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:11.644 [2024-12-06 17:15:53.699909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:11.644 17:15:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:11.902 malloc0 00:16:11.902 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:12.159 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:12.417 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.88WQ6ZvsQP 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.88WQ6ZvsQP 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83337 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83337 /var/tmp/bdevperf.sock 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83337 ']' 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.982 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:12.982 [2024-12-06 17:15:55.023805] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
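[editor's note] For reference, the target-side setup traced above (setup_nvmf_tgt, target/tls.sh lines 50-59) reduces to this RPC sequence, with every value exactly as it appears in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener ("experimental" notice above)
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0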
00:16:12.982 [2024-12-06 17:15:55.023902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83337 ] 00:16:12.982 [2024-12-06 17:15:55.166426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:12.982 [2024-12-06 17:15:55.215309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.238 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.238 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:13.238 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:13.494 17:15:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:13.751 [2024-12-06 17:15:55.940255] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:13.751 TLSTESTn1 00:16:13.751 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:14.007 Running I/O for 10 seconds... 00:16:15.868 3774.00 IOPS, 14.74 MiB/s [2024-12-06T17:15:59.547Z] 3576.00 IOPS, 13.97 MiB/s [2024-12-06T17:16:00.482Z] 3639.67 IOPS, 14.22 MiB/s [2024-12-06T17:16:01.417Z] 3638.00 IOPS, 14.21 MiB/s [2024-12-06T17:16:02.353Z] 3542.80 IOPS, 13.84 MiB/s [2024-12-06T17:16:03.288Z] 3536.50 IOPS, 13.81 MiB/s [2024-12-06T17:16:04.225Z] 3559.86 IOPS, 13.91 MiB/s [2024-12-06T17:16:05.167Z] 3554.50 IOPS, 13.88 MiB/s [2024-12-06T17:16:06.540Z] 3573.89 IOPS, 13.96 MiB/s [2024-12-06T17:16:06.540Z] 3591.10 IOPS, 14.03 MiB/s 00:16:24.242 Latency(us) 00:16:24.242 [2024-12-06T17:16:06.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.242 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:24.242 Verification LBA range: start 0x0 length 0x2000 00:16:24.242 TLSTESTn1 : 10.02 3595.20 14.04 0.00 0.00 35528.11 7357.91 33840.41 00:16:24.242 [2024-12-06T17:16:06.540Z] =================================================================================================================== 00:16:24.242 [2024-12-06T17:16:06.540Z] Total : 3595.20 14.04 0.00 0.00 35528.11 7357.91 33840.41 00:16:24.242 { 00:16:24.242 "results": [ 00:16:24.242 { 00:16:24.242 "job": "TLSTESTn1", 00:16:24.242 "core_mask": "0x4", 00:16:24.242 "workload": "verify", 00:16:24.242 "status": "finished", 00:16:24.242 "verify_range": { 00:16:24.242 "start": 0, 00:16:24.242 "length": 8192 00:16:24.242 }, 00:16:24.242 "queue_depth": 128, 00:16:24.242 "io_size": 4096, 00:16:24.242 "runtime": 10.023927, 00:16:24.242 "iops": 3595.1977702950153, 00:16:24.242 "mibps": 14.043741290214903, 00:16:24.242 "io_failed": 0, 00:16:24.242 "io_timeout": 0, 00:16:24.242 "avg_latency_us": 35528.11244211918, 00:16:24.242 "min_latency_us": 7357.905454545455, 00:16:24.242 "max_latency_us": 33840.40727272727 00:16:24.242 } 00:16:24.242 ], 00:16:24.242 "core_count": 1 00:16:24.242 } 00:16:24.242 17:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83337 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83337 ']' 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83337 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83337 00:16:24.242 killing process with pid 83337 00:16:24.242 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.242 00:16:24.242 Latency(us) 00:16:24.242 [2024-12-06T17:16:06.540Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.242 [2024-12-06T17:16:06.540Z] =================================================================================================================== 00:16:24.242 [2024-12-06T17:16:06.540Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83337' 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83337 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83337 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.88WQ6ZvsQP 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.88WQ6ZvsQP 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.88WQ6ZvsQP 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:24.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
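[editor's note] The bdevperf half of the successful run above is equally compact: start bdevperf in RPC-server mode, register the key, attach the TLS-enabled controller, then drive I/O. All paths and flags are as traced:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# (the test waits for /var/tmp/bdevperf.sock before issuing RPCs)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests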
00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.88WQ6ZvsQP 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.88WQ6ZvsQP 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83479 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83479 /var/tmp/bdevperf.sock 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83479 ']' 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.242 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.242 [2024-12-06 17:16:06.419724] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
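[editor's note] What follows is the first negative test: target/tls.sh@171-172 loosened the key file to 0666 and wrapped run_bdevperf in NOT, so the keyring error below is the expected outcome, not a regression. In outline:

chmod 0666 /tmp/tmp.88WQ6ZvsQP
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
    /tmp/tmp.88WQ6ZvsQP   # passes only because keyring_file_add_key rejects the key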
00:16:24.242 [2024-12-06 17:16:06.420064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83479 ] 00:16:24.542 [2024-12-06 17:16:06.559178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.542 [2024-12-06 17:16:06.592280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.542 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.542 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:24.542 17:16:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:24.801 [2024-12-06 17:16:06.990665] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.88WQ6ZvsQP': 0100666 00:16:24.801 [2024-12-06 17:16:06.990716] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:24.801 2024/12/06 17:16:06 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.88WQ6ZvsQP], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:24.801 request: 00:16:24.801 { 00:16:24.801 "method": "keyring_file_add_key", 00:16:24.801 "params": { 00:16:24.801 "name": "key0", 00:16:24.801 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:24.801 } 00:16:24.801 } 00:16:24.801 Got JSON-RPC error response 00:16:24.801 GoRPCClient: error on JSON-RPC call 00:16:24.801 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.060 [2024-12-06 17:16:07.266818] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:25.060 [2024-12-06 17:16:07.266883] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:25.060 2024/12/06 17:16:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:16:25.060 request: 00:16:25.060 { 00:16:25.060 "method": "bdev_nvme_attach_controller", 00:16:25.060 "params": { 00:16:25.060 "name": "TLSTEST", 00:16:25.060 "trtype": "tcp", 00:16:25.060 "traddr": "10.0.0.3", 00:16:25.060 "adrfam": "ipv4", 00:16:25.060 "trsvcid": "4420", 00:16:25.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:25.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:25.061 "prchk_reftag": false, 00:16:25.061 "prchk_guard": false, 00:16:25.061 "hdgst": false, 00:16:25.061 "ddgst": false, 00:16:25.061 "psk": "key0", 00:16:25.061 "allow_unrecognized_csi": false 00:16:25.061 } 00:16:25.061 } 00:16:25.061 Got JSON-RPC error response 00:16:25.061 GoRPCClient: error on JSON-RPC call 00:16:25.061 17:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83479 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83479 ']' 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83479 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83479 00:16:25.061 killing process with pid 83479 00:16:25.061 Received shutdown signal, test time was about 10.000000 seconds 00:16:25.061 00:16:25.061 Latency(us) 00:16:25.061 [2024-12-06T17:16:07.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:25.061 [2024-12-06T17:16:07.359Z] =================================================================================================================== 00:16:25.061 [2024-12-06T17:16:07.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83479' 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83479 00:16:25.061 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83479 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83237 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83237 ']' 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83237 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83237 00:16:25.319 killing process with pid 83237 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83237' 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83237 00:16:25.319 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@978 -- # wait 83237 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83523 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83523 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83523 ']' 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.578 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.578 [2024-12-06 17:16:07.732521] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:25.578 [2024-12-06 17:16:07.733663] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.837 [2024-12-06 17:16:07.883519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.837 [2024-12-06 17:16:07.915451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.837 [2024-12-06 17:16:07.915520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.837 [2024-12-06 17:16:07.915531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.837 [2024-12-06 17:16:07.915540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.837 [2024-12-06 17:16:07.915547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
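[editor's note] The "Invalid permissions for key file ... 0100666" error above comes from keyring_file_check_path. Judging by the mode it prints, the gate rejects any key file with group/other bits set; a rough shell equivalent follows, though the exact mask SPDK applies is an assumption:

key=/tmp/tmp.88WQ6ZvsQP
mode=$(stat -c '%a' "$key")
if (( 8#$mode & 8#077 )); then   # any group/other permission bits set?
    echo "Invalid permissions for key file '$key': 0100$mode" >&2
fi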
00:16:25.837 [2024-12-06 17:16:07.915878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.837 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.88WQ6ZvsQP 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.88WQ6ZvsQP 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.88WQ6ZvsQP 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.88WQ6ZvsQP 00:16:25.837 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:26.094 [2024-12-06 17:16:08.366580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.094 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:26.659 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:26.659 [2024-12-06 17:16:08.930711] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:26.659 [2024-12-06 17:16:08.930960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:26.917 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:27.175 malloc0 00:16:27.175 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:27.433 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:27.691 [2024-12-06 17:16:09.949773] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.88WQ6ZvsQP': 
0100666 00:16:27.691 [2024-12-06 17:16:09.949830] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:27.691 2024/12/06 17:16:09 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.88WQ6ZvsQP], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:16:27.691 request: 00:16:27.691 { 00:16:27.691 "method": "keyring_file_add_key", 00:16:27.691 "params": { 00:16:27.691 "name": "key0", 00:16:27.691 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:27.691 } 00:16:27.691 } 00:16:27.691 Got JSON-RPC error response 00:16:27.691 GoRPCClient: error on JSON-RPC call 00:16:27.691 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:27.948 [2024-12-06 17:16:10.233872] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:27.948 [2024-12-06 17:16:10.233957] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:27.948 2024/12/06 17:16:10 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:16:27.948 request: 00:16:27.948 { 00:16:27.948 "method": "nvmf_subsystem_add_host", 00:16:27.948 "params": { 00:16:27.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.948 "host": "nqn.2016-06.io.spdk:host1", 00:16:27.948 "psk": "key0" 00:16:27.948 } 00:16:27.948 } 00:16:27.948 Got JSON-RPC error response 00:16:27.948 GoRPCClient: error on JSON-RPC call 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83523 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83523 ']' 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83523 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83523 00:16:28.207 killing process with pid 83523 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83523' 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83523 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83523 00:16:28.207 17:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.88WQ6ZvsQP 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83638 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83638 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83638 ']' 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:28.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:28.207 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.465 [2024-12-06 17:16:10.523435] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:28.465 [2024-12-06 17:16:10.523574] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.465 [2024-12-06 17:16:10.677072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:28.465 [2024-12-06 17:16:10.722783] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.465 [2024-12-06 17:16:10.722843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.465 [2024-12-06 17:16:10.722859] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.465 [2024-12-06 17:16:10.722871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.465 [2024-12-06 17:16:10.722881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
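[editor's note] Both negative tests route through autotest_common.sh's NOT wrapper; the es=1 and (( es > 128 )) bookkeeping visible in the trace suggests roughly the logic below. This is a simplification; the real helper also special-cases particular core-dump signals:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # died to a signal: treat as a real failure
    (( es != 0 ))                    # succeed only if the wrapped command failed
}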
00:16:28.465 [2024-12-06 17:16:10.723253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.88WQ6ZvsQP 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.88WQ6ZvsQP 00:16:28.765 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:29.023 [2024-12-06 17:16:11.111680] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.023 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:29.281 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:29.538 [2024-12-06 17:16:11.759821] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:29.538 [2024-12-06 17:16:11.760079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:29.538 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:29.795 malloc0 00:16:29.795 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:30.361 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:30.361 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83740 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83740 /var/tmp/bdevperf.sock 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83740 ']' 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
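[editor's note] With the key restored to 0600 the full chain succeeds, and the run that follows captures both sides' configuration via save_config (target/tls.sh@198-199). The dumps are long; when reading them it can help to filter for the TLS-relevant subsystems, for example (the jq filtering is illustrative only, not part of the test):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config \
    | jq '.subsystems[] | select(.subsystem == "keyring" or .subsystem == "nvmf")'
$rpc -s /var/tmp/bdevperf.sock save_config \
    | jq '.subsystems[] | select(.subsystem == "keyring" or .subsystem == "bdev")'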
00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.926 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.926 [2024-12-06 17:16:13.008874] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:30.927 [2024-12-06 17:16:13.008978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83740 ] 00:16:30.927 [2024-12-06 17:16:13.202154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.186 [2024-12-06 17:16:13.243520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.186 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.186 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:31.186 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:31.444 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:31.735 [2024-12-06 17:16:13.928535] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:31.735 TLSTESTn1 00:16:31.735 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:16:32.301 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:32.301 "subsystems": [ 00:16:32.301 { 00:16:32.301 "subsystem": "keyring", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "keyring_file_add_key", 00:16:32.301 "params": { 00:16:32.301 "name": "key0", 00:16:32.301 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "iobuf", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "iobuf_set_options", 00:16:32.301 "params": { 00:16:32.301 "enable_numa": false, 00:16:32.301 "large_bufsize": 135168, 00:16:32.301 "large_pool_count": 1024, 00:16:32.301 "small_bufsize": 8192, 00:16:32.301 "small_pool_count": 8192 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "sock", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "sock_set_default_impl", 00:16:32.301 "params": { 00:16:32.301 "impl_name": "posix" 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "sock_impl_set_options", 00:16:32.301 "params": { 00:16:32.301 "enable_ktls": false, 00:16:32.301 "enable_placement_id": 0, 00:16:32.301 "enable_quickack": false, 
00:16:32.301 "enable_recv_pipe": true, 00:16:32.301 "enable_zerocopy_send_client": false, 00:16:32.301 "enable_zerocopy_send_server": true, 00:16:32.301 "impl_name": "ssl", 00:16:32.301 "recv_buf_size": 4096, 00:16:32.301 "send_buf_size": 4096, 00:16:32.301 "tls_version": 0, 00:16:32.301 "zerocopy_threshold": 0 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "sock_impl_set_options", 00:16:32.301 "params": { 00:16:32.301 "enable_ktls": false, 00:16:32.301 "enable_placement_id": 0, 00:16:32.301 "enable_quickack": false, 00:16:32.301 "enable_recv_pipe": true, 00:16:32.301 "enable_zerocopy_send_client": false, 00:16:32.301 "enable_zerocopy_send_server": true, 00:16:32.301 "impl_name": "posix", 00:16:32.301 "recv_buf_size": 2097152, 00:16:32.301 "send_buf_size": 2097152, 00:16:32.301 "tls_version": 0, 00:16:32.301 "zerocopy_threshold": 0 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "vmd", 00:16:32.301 "config": [] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "accel", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "accel_set_options", 00:16:32.301 "params": { 00:16:32.301 "buf_count": 2048, 00:16:32.301 "large_cache_size": 16, 00:16:32.301 "sequence_count": 2048, 00:16:32.301 "small_cache_size": 128, 00:16:32.301 "task_count": 2048 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "bdev", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "bdev_set_options", 00:16:32.301 "params": { 00:16:32.301 "bdev_auto_examine": true, 00:16:32.301 "bdev_io_cache_size": 256, 00:16:32.301 "bdev_io_pool_size": 65535, 00:16:32.301 "iobuf_large_cache_size": 16, 00:16:32.301 "iobuf_small_cache_size": 128 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_raid_set_options", 00:16:32.301 "params": { 00:16:32.301 "process_max_bandwidth_mb_sec": 0, 00:16:32.301 "process_window_size_kb": 1024 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_iscsi_set_options", 00:16:32.301 "params": { 00:16:32.301 "timeout_sec": 30 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_nvme_set_options", 00:16:32.301 "params": { 00:16:32.301 "action_on_timeout": "none", 00:16:32.301 "allow_accel_sequence": false, 00:16:32.301 "arbitration_burst": 0, 00:16:32.301 "bdev_retry_count": 3, 00:16:32.302 "ctrlr_loss_timeout_sec": 0, 00:16:32.302 "delay_cmd_submit": true, 00:16:32.302 "dhchap_dhgroups": [ 00:16:32.302 "null", 00:16:32.302 "ffdhe2048", 00:16:32.302 "ffdhe3072", 00:16:32.302 "ffdhe4096", 00:16:32.302 "ffdhe6144", 00:16:32.302 "ffdhe8192" 00:16:32.302 ], 00:16:32.302 "dhchap_digests": [ 00:16:32.302 "sha256", 00:16:32.302 "sha384", 00:16:32.302 "sha512" 00:16:32.302 ], 00:16:32.302 "disable_auto_failback": false, 00:16:32.302 "fast_io_fail_timeout_sec": 0, 00:16:32.302 "generate_uuids": false, 00:16:32.302 "high_priority_weight": 0, 00:16:32.302 "io_path_stat": false, 00:16:32.302 "io_queue_requests": 0, 00:16:32.302 "keep_alive_timeout_ms": 10000, 00:16:32.302 "low_priority_weight": 0, 00:16:32.302 "medium_priority_weight": 0, 00:16:32.302 "nvme_adminq_poll_period_us": 10000, 00:16:32.302 "nvme_error_stat": false, 00:16:32.302 "nvme_ioq_poll_period_us": 0, 00:16:32.302 "rdma_cm_event_timeout_ms": 0, 00:16:32.302 "rdma_max_cq_size": 0, 00:16:32.302 "rdma_srq_size": 0, 00:16:32.302 "reconnect_delay_sec": 0, 00:16:32.302 "timeout_admin_us": 0, 00:16:32.302 "timeout_us": 0, 00:16:32.302 
"transport_ack_timeout": 0, 00:16:32.302 "transport_retry_count": 4, 00:16:32.302 "transport_tos": 0 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "bdev_nvme_set_hotplug", 00:16:32.302 "params": { 00:16:32.302 "enable": false, 00:16:32.302 "period_us": 100000 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "bdev_malloc_create", 00:16:32.302 "params": { 00:16:32.302 "block_size": 4096, 00:16:32.302 "dif_is_head_of_md": false, 00:16:32.302 "dif_pi_format": 0, 00:16:32.302 "dif_type": 0, 00:16:32.302 "md_size": 0, 00:16:32.302 "name": "malloc0", 00:16:32.302 "num_blocks": 8192, 00:16:32.302 "optimal_io_boundary": 0, 00:16:32.302 "physical_block_size": 4096, 00:16:32.302 "uuid": "ee917af2-fbdc-4b58-a448-978bfa743bcc" 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "bdev_wait_for_examine" 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "subsystem": "nbd", 00:16:32.302 "config": [] 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "subsystem": "scheduler", 00:16:32.302 "config": [ 00:16:32.302 { 00:16:32.302 "method": "framework_set_scheduler", 00:16:32.302 "params": { 00:16:32.302 "name": "static" 00:16:32.302 } 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "subsystem": "nvmf", 00:16:32.302 "config": [ 00:16:32.302 { 00:16:32.302 "method": "nvmf_set_config", 00:16:32.302 "params": { 00:16:32.302 "admin_cmd_passthru": { 00:16:32.302 "identify_ctrlr": false 00:16:32.302 }, 00:16:32.302 "dhchap_dhgroups": [ 00:16:32.302 "null", 00:16:32.302 "ffdhe2048", 00:16:32.302 "ffdhe3072", 00:16:32.302 "ffdhe4096", 00:16:32.302 "ffdhe6144", 00:16:32.302 "ffdhe8192" 00:16:32.302 ], 00:16:32.302 "dhchap_digests": [ 00:16:32.302 "sha256", 00:16:32.302 "sha384", 00:16:32.302 "sha512" 00:16:32.302 ], 00:16:32.302 "discovery_filter": "match_any" 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_set_max_subsystems", 00:16:32.302 "params": { 00:16:32.302 "max_subsystems": 1024 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_set_crdt", 00:16:32.302 "params": { 00:16:32.302 "crdt1": 0, 00:16:32.302 "crdt2": 0, 00:16:32.302 "crdt3": 0 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_create_transport", 00:16:32.302 "params": { 00:16:32.302 "abort_timeout_sec": 1, 00:16:32.302 "ack_timeout": 0, 00:16:32.302 "buf_cache_size": 4294967295, 00:16:32.302 "c2h_success": false, 00:16:32.302 "data_wr_pool_size": 0, 00:16:32.302 "dif_insert_or_strip": false, 00:16:32.302 "in_capsule_data_size": 4096, 00:16:32.302 "io_unit_size": 131072, 00:16:32.302 "max_aq_depth": 128, 00:16:32.302 "max_io_qpairs_per_ctrlr": 127, 00:16:32.302 "max_io_size": 131072, 00:16:32.302 "max_queue_depth": 128, 00:16:32.302 "num_shared_buffers": 511, 00:16:32.302 "sock_priority": 0, 00:16:32.302 "trtype": "TCP", 00:16:32.302 "zcopy": false 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_create_subsystem", 00:16:32.302 "params": { 00:16:32.302 "allow_any_host": false, 00:16:32.302 "ana_reporting": false, 00:16:32.302 "max_cntlid": 65519, 00:16:32.302 "max_namespaces": 10, 00:16:32.302 "min_cntlid": 1, 00:16:32.302 "model_number": "SPDK bdev Controller", 00:16:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.302 "serial_number": "SPDK00000000000001" 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_subsystem_add_host", 00:16:32.302 "params": { 00:16:32.302 "host": "nqn.2016-06.io.spdk:host1", 00:16:32.302 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:16:32.302 "psk": "key0" 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_subsystem_add_ns", 00:16:32.302 "params": { 00:16:32.302 "namespace": { 00:16:32.302 "bdev_name": "malloc0", 00:16:32.302 "nguid": "EE917AF2FBDC4B58A448978BFA743BCC", 00:16:32.302 "no_auto_visible": false, 00:16:32.302 "nsid": 1, 00:16:32.302 "uuid": "ee917af2-fbdc-4b58-a448-978bfa743bcc" 00:16:32.302 }, 00:16:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_subsystem_add_listener", 00:16:32.302 "params": { 00:16:32.302 "listen_address": { 00:16:32.302 "adrfam": "IPv4", 00:16:32.302 "traddr": "10.0.0.3", 00:16:32.302 "trsvcid": "4420", 00:16:32.302 "trtype": "TCP" 00:16:32.302 }, 00:16:32.302 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.302 "secure_channel": true 00:16:32.302 } 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 }' 00:16:32.302 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:32.560 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:32.560 "subsystems": [ 00:16:32.560 { 00:16:32.560 "subsystem": "keyring", 00:16:32.560 "config": [ 00:16:32.560 { 00:16:32.560 "method": "keyring_file_add_key", 00:16:32.560 "params": { 00:16:32.560 "name": "key0", 00:16:32.560 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:32.560 } 00:16:32.560 } 00:16:32.560 ] 00:16:32.560 }, 00:16:32.560 { 00:16:32.560 "subsystem": "iobuf", 00:16:32.560 "config": [ 00:16:32.560 { 00:16:32.560 "method": "iobuf_set_options", 00:16:32.560 "params": { 00:16:32.560 "enable_numa": false, 00:16:32.560 "large_bufsize": 135168, 00:16:32.560 "large_pool_count": 1024, 00:16:32.560 "small_bufsize": 8192, 00:16:32.560 "small_pool_count": 8192 00:16:32.560 } 00:16:32.560 } 00:16:32.560 ] 00:16:32.560 }, 00:16:32.560 { 00:16:32.560 "subsystem": "sock", 00:16:32.560 "config": [ 00:16:32.560 { 00:16:32.561 "method": "sock_set_default_impl", 00:16:32.561 "params": { 00:16:32.561 "impl_name": "posix" 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "sock_impl_set_options", 00:16:32.561 "params": { 00:16:32.561 "enable_ktls": false, 00:16:32.561 "enable_placement_id": 0, 00:16:32.561 "enable_quickack": false, 00:16:32.561 "enable_recv_pipe": true, 00:16:32.561 "enable_zerocopy_send_client": false, 00:16:32.561 "enable_zerocopy_send_server": true, 00:16:32.561 "impl_name": "ssl", 00:16:32.561 "recv_buf_size": 4096, 00:16:32.561 "send_buf_size": 4096, 00:16:32.561 "tls_version": 0, 00:16:32.561 "zerocopy_threshold": 0 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "sock_impl_set_options", 00:16:32.561 "params": { 00:16:32.561 "enable_ktls": false, 00:16:32.561 "enable_placement_id": 0, 00:16:32.561 "enable_quickack": false, 00:16:32.561 "enable_recv_pipe": true, 00:16:32.561 "enable_zerocopy_send_client": false, 00:16:32.561 "enable_zerocopy_send_server": true, 00:16:32.561 "impl_name": "posix", 00:16:32.561 "recv_buf_size": 2097152, 00:16:32.561 "send_buf_size": 2097152, 00:16:32.561 "tls_version": 0, 00:16:32.561 "zerocopy_threshold": 0 00:16:32.561 } 00:16:32.561 } 00:16:32.561 ] 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "subsystem": "vmd", 00:16:32.561 "config": [] 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "subsystem": "accel", 00:16:32.561 "config": [ 00:16:32.561 { 00:16:32.561 "method": "accel_set_options", 00:16:32.561 
"params": { 00:16:32.561 "buf_count": 2048, 00:16:32.561 "large_cache_size": 16, 00:16:32.561 "sequence_count": 2048, 00:16:32.561 "small_cache_size": 128, 00:16:32.561 "task_count": 2048 00:16:32.561 } 00:16:32.561 } 00:16:32.561 ] 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "subsystem": "bdev", 00:16:32.561 "config": [ 00:16:32.561 { 00:16:32.561 "method": "bdev_set_options", 00:16:32.561 "params": { 00:16:32.561 "bdev_auto_examine": true, 00:16:32.561 "bdev_io_cache_size": 256, 00:16:32.561 "bdev_io_pool_size": 65535, 00:16:32.561 "iobuf_large_cache_size": 16, 00:16:32.561 "iobuf_small_cache_size": 128 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "bdev_raid_set_options", 00:16:32.561 "params": { 00:16:32.561 "process_max_bandwidth_mb_sec": 0, 00:16:32.561 "process_window_size_kb": 1024 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "bdev_iscsi_set_options", 00:16:32.561 "params": { 00:16:32.561 "timeout_sec": 30 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "bdev_nvme_set_options", 00:16:32.561 "params": { 00:16:32.561 "action_on_timeout": "none", 00:16:32.561 "allow_accel_sequence": false, 00:16:32.561 "arbitration_burst": 0, 00:16:32.561 "bdev_retry_count": 3, 00:16:32.561 "ctrlr_loss_timeout_sec": 0, 00:16:32.561 "delay_cmd_submit": true, 00:16:32.561 "dhchap_dhgroups": [ 00:16:32.561 "null", 00:16:32.561 "ffdhe2048", 00:16:32.561 "ffdhe3072", 00:16:32.561 "ffdhe4096", 00:16:32.561 "ffdhe6144", 00:16:32.561 "ffdhe8192" 00:16:32.561 ], 00:16:32.561 "dhchap_digests": [ 00:16:32.561 "sha256", 00:16:32.561 "sha384", 00:16:32.561 "sha512" 00:16:32.561 ], 00:16:32.561 "disable_auto_failback": false, 00:16:32.561 "fast_io_fail_timeout_sec": 0, 00:16:32.561 "generate_uuids": false, 00:16:32.561 "high_priority_weight": 0, 00:16:32.561 "io_path_stat": false, 00:16:32.561 "io_queue_requests": 512, 00:16:32.561 "keep_alive_timeout_ms": 10000, 00:16:32.561 "low_priority_weight": 0, 00:16:32.561 "medium_priority_weight": 0, 00:16:32.561 "nvme_adminq_poll_period_us": 10000, 00:16:32.561 "nvme_error_stat": false, 00:16:32.561 "nvme_ioq_poll_period_us": 0, 00:16:32.561 "rdma_cm_event_timeout_ms": 0, 00:16:32.561 "rdma_max_cq_size": 0, 00:16:32.561 "rdma_srq_size": 0, 00:16:32.561 "reconnect_delay_sec": 0, 00:16:32.561 "timeout_admin_us": 0, 00:16:32.561 "timeout_us": 0, 00:16:32.561 "transport_ack_timeout": 0, 00:16:32.561 "transport_retry_count": 4, 00:16:32.561 "transport_tos": 0 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "bdev_nvme_attach_controller", 00:16:32.561 "params": { 00:16:32.561 "adrfam": "IPv4", 00:16:32.561 "ctrlr_loss_timeout_sec": 0, 00:16:32.561 "ddgst": false, 00:16:32.561 "fast_io_fail_timeout_sec": 0, 00:16:32.561 "hdgst": false, 00:16:32.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:32.561 "multipath": "multipath", 00:16:32.561 "name": "TLSTEST", 00:16:32.561 "prchk_guard": false, 00:16:32.561 "prchk_reftag": false, 00:16:32.561 "psk": "key0", 00:16:32.561 "reconnect_delay_sec": 0, 00:16:32.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:32.561 "traddr": "10.0.0.3", 00:16:32.561 "trsvcid": "4420", 00:16:32.561 "trtype": "TCP" 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "bdev_nvme_set_hotplug", 00:16:32.561 "params": { 00:16:32.561 "enable": false, 00:16:32.561 "period_us": 100000 00:16:32.561 } 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 "method": "bdev_wait_for_examine" 00:16:32.561 } 00:16:32.561 ] 00:16:32.561 }, 00:16:32.561 { 00:16:32.561 
"subsystem": "nbd", 00:16:32.561 "config": [] 00:16:32.561 } 00:16:32.561 ] 00:16:32.561 }' 00:16:32.561 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83740 00:16:32.561 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83740 ']' 00:16:32.561 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83740 00:16:32.561 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:32.561 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.561 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83740 00:16:32.819 killing process with pid 83740 00:16:32.819 Received shutdown signal, test time was about 10.000000 seconds 00:16:32.819 00:16:32.819 Latency(us) 00:16:32.819 [2024-12-06T17:16:15.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.819 [2024-12-06T17:16:15.117Z] =================================================================================================================== 00:16:32.819 [2024-12-06T17:16:15.117Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.819 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:32.819 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:32.819 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83740' 00:16:32.819 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83740 00:16:32.819 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83740 00:16:32.819 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83638 00:16:32.819 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83638 ']' 00:16:32.819 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83638 00:16:32.819 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:32.819 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.819 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83638 00:16:32.820 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:32.820 killing process with pid 83638 00:16:32.820 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:32.820 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83638' 00:16:32.820 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83638 00:16:32.820 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83638 00:16:33.078 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:33.078 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.078 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.078 17:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.078 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:33.078 "subsystems": [ 00:16:33.078 { 00:16:33.078 "subsystem": "keyring", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "keyring_file_add_key", 00:16:33.078 "params": { 00:16:33.078 "name": "key0", 00:16:33.078 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:33.078 } 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "iobuf", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "iobuf_set_options", 00:16:33.078 "params": { 00:16:33.078 "enable_numa": false, 00:16:33.078 "large_bufsize": 135168, 00:16:33.078 "large_pool_count": 1024, 00:16:33.078 "small_bufsize": 8192, 00:16:33.078 "small_pool_count": 8192 00:16:33.078 } 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "sock", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "sock_set_default_impl", 00:16:33.078 "params": { 00:16:33.078 "impl_name": "posix" 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "sock_impl_set_options", 00:16:33.078 "params": { 00:16:33.078 "enable_ktls": false, 00:16:33.078 "enable_placement_id": 0, 00:16:33.078 "enable_quickack": false, 00:16:33.078 "enable_recv_pipe": true, 00:16:33.078 "enable_zerocopy_send_client": false, 00:16:33.078 "enable_zerocopy_send_server": true, 00:16:33.078 "impl_name": "ssl", 00:16:33.078 "recv_buf_size": 4096, 00:16:33.078 "send_buf_size": 4096, 00:16:33.078 "tls_version": 0, 00:16:33.078 "zerocopy_threshold": 0 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "sock_impl_set_options", 00:16:33.078 "params": { 00:16:33.078 "enable_ktls": false, 00:16:33.078 "enable_placement_id": 0, 00:16:33.078 "enable_quickack": false, 00:16:33.078 "enable_recv_pipe": true, 00:16:33.078 "enable_zerocopy_send_client": false, 00:16:33.078 "enable_zerocopy_send_server": true, 00:16:33.078 "impl_name": "posix", 00:16:33.078 "recv_buf_size": 2097152, 00:16:33.078 "send_buf_size": 2097152, 00:16:33.078 "tls_version": 0, 00:16:33.078 "zerocopy_threshold": 0 00:16:33.078 } 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "vmd", 00:16:33.078 "config": [] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "accel", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "accel_set_options", 00:16:33.078 "params": { 00:16:33.078 "buf_count": 2048, 00:16:33.078 "large_cache_size": 16, 00:16:33.078 "sequence_count": 2048, 00:16:33.078 "small_cache_size": 128, 00:16:33.078 "task_count": 2048 00:16:33.078 } 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "bdev", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "bdev_set_options", 00:16:33.078 "params": { 00:16:33.078 "bdev_auto_examine": true, 00:16:33.078 "bdev_io_cache_size": 256, 00:16:33.078 "bdev_io_pool_size": 65535, 00:16:33.078 "iobuf_large_cache_size": 16, 00:16:33.078 "iobuf_small_cache_size": 128 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "bdev_raid_set_options", 00:16:33.078 "params": { 00:16:33.078 "process_max_bandwidth_mb_sec": 0, 00:16:33.078 "process_window_size_kb": 1024 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "bdev_iscsi_set_options", 00:16:33.078 "params": { 00:16:33.078 "timeout_sec": 30 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": 
"bdev_nvme_set_options", 00:16:33.078 "params": { 00:16:33.078 "action_on_timeout": "none", 00:16:33.078 "allow_accel_sequence": false, 00:16:33.078 "arbitration_burst": 0, 00:16:33.078 "bdev_retry_count": 3, 00:16:33.078 "ctrlr_loss_timeout_sec": 0, 00:16:33.078 "delay_cmd_submit": true, 00:16:33.078 "dhchap_dhgroups": [ 00:16:33.078 "null", 00:16:33.078 "ffdhe2048", 00:16:33.078 "ffdhe3072", 00:16:33.078 "ffdhe4096", 00:16:33.078 "ffdhe6144", 00:16:33.078 "ffdhe8192" 00:16:33.078 ], 00:16:33.078 "dhchap_digests": [ 00:16:33.078 "sha256", 00:16:33.078 "sha384", 00:16:33.078 "sha512" 00:16:33.078 ], 00:16:33.078 "disable_auto_failback": false, 00:16:33.078 "fast_io_fail_timeout_sec": 0, 00:16:33.078 "generate_uuids": false, 00:16:33.078 "high_priority_weight": 0, 00:16:33.078 "io_path_stat": false, 00:16:33.078 "io_queue_requests": 0, 00:16:33.078 "keep_alive_timeout_ms": 10000, 00:16:33.078 "low_priority_weight": 0, 00:16:33.078 "medium_priority_weight": 0, 00:16:33.078 "nvme_adminq_poll_period_us": 10000, 00:16:33.078 "nvme_error_stat": false, 00:16:33.078 "nvme_ioq_poll_period_us": 0, 00:16:33.078 "rdma_cm_event_timeout_ms": 0, 00:16:33.078 "rdma_max_cq_size": 0, 00:16:33.078 "rdma_srq_size": 0, 00:16:33.078 "reconnect_delay_sec": 0, 00:16:33.078 "timeout_admin_us": 0, 00:16:33.078 "timeout_us": 0, 00:16:33.078 "transport_ack_timeout": 0, 00:16:33.078 "transport_retry_count": 4, 00:16:33.078 "transport_tos": 0 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "bdev_nvme_set_hotplug", 00:16:33.078 "params": { 00:16:33.078 "enable": false, 00:16:33.078 "period_us": 100000 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "bdev_malloc_create", 00:16:33.078 "params": { 00:16:33.078 "block_size": 4096, 00:16:33.078 "dif_is_head_of_md": false, 00:16:33.078 "dif_pi_format": 0, 00:16:33.078 "dif_type": 0, 00:16:33.078 "md_size": 0, 00:16:33.078 "name": "malloc0", 00:16:33.078 "num_blocks": 8192, 00:16:33.078 "optimal_io_boundary": 0, 00:16:33.078 "physical_block_size": 4096, 00:16:33.078 "uuid": "ee917af2-fbdc-4b58-a448-978bfa743bcc" 00:16:33.078 } 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "method": "bdev_wait_for_examine" 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "nbd", 00:16:33.078 "config": [] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "scheduler", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "framework_set_scheduler", 00:16:33.078 "params": { 00:16:33.078 "name": "static" 00:16:33.078 } 00:16:33.078 } 00:16:33.078 ] 00:16:33.078 }, 00:16:33.078 { 00:16:33.078 "subsystem": "nvmf", 00:16:33.078 "config": [ 00:16:33.078 { 00:16:33.078 "method": "nvmf_set_config", 00:16:33.078 "params": { 00:16:33.078 "admin_cmd_passthru": { 00:16:33.078 "identify_ctrlr": false 00:16:33.078 }, 00:16:33.078 "dhchap_dhgroups": [ 00:16:33.078 "null", 00:16:33.078 "ffdhe2048", 00:16:33.078 "ffdhe3072", 00:16:33.078 "ffdhe4096", 00:16:33.078 "ffdhe6144", 00:16:33.078 "ffdhe8192" 00:16:33.078 ], 00:16:33.078 "dhchap_digests": [ 00:16:33.079 "sha256", 00:16:33.079 "sha384", 00:16:33.079 "sha512" 00:16:33.079 ], 00:16:33.079 "discovery_filter": "match_any" 00:16:33.079 } 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_set_max_subsystems", 00:16:33.079 "params": { 00:16:33.079 "max_subsystems": 1024 00:16:33.079 } 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_set_crdt", 00:16:33.079 "params": { 00:16:33.079 "crdt1": 0, 00:16:33.079 "crdt2": 0, 00:16:33.079 "crdt3": 0 00:16:33.079 
} 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_create_transport", 00:16:33.079 "params": { 00:16:33.079 "abort_timeout_sec": 1, 00:16:33.079 "ack_timeout": 0, 00:16:33.079 "buf_cache_size": 4294967295, 00:16:33.079 "c2h_success": false, 00:16:33.079 "data_wr_pool_size": 0, 00:16:33.079 "dif_insert_or_strip": false, 00:16:33.079 "in_capsule_data_size": 4096, 00:16:33.079 "io_unit_size": 131072, 00:16:33.079 "max_aq_depth": 128, 00:16:33.079 "max_io_qpairs_per_ctrlr": 127, 00:16:33.079 "max_io_size": 131072, 00:16:33.079 "max_queue_depth": 128, 00:16:33.079 "num_shared_buffers": 511, 00:16:33.079 "sock_priority": 0, 00:16:33.079 "trtype": "TCP", 00:16:33.079 "zcopy": false 00:16:33.079 } 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_create_subsystem", 00:16:33.079 "params": { 00:16:33.079 "allow_any_host": false, 00:16:33.079 "ana_reporting": false, 00:16:33.079 "max_cntlid": 65519, 00:16:33.079 "max_namespaces": 10, 00:16:33.079 "min_cntlid": 1, 00:16:33.079 "model_number": "SPDK bdev Controller", 00:16:33.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.079 "serial_number": "SPDK00000000000001" 00:16:33.079 } 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_subsystem_add_host", 00:16:33.079 "params": { 00:16:33.079 "host": "nqn.2016-06.io.spdk:host1", 00:16:33.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.079 "psk": "key0" 00:16:33.079 } 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_subsystem_add_ns", 00:16:33.079 "params": { 00:16:33.079 "namespace": { 00:16:33.079 "bdev_name": "malloc0", 00:16:33.079 "nguid": "EE917AF2FBDC4B58A448978BFA743BCC", 00:16:33.079 "no_auto_visible": false, 00:16:33.079 "nsid": 1, 00:16:33.079 "uuid": "ee917af2-fbdc-4b58-a448-978bfa743bcc" 00:16:33.079 }, 00:16:33.079 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:33.079 } 00:16:33.079 }, 00:16:33.079 { 00:16:33.079 "method": "nvmf_subsystem_add_listener", 00:16:33.079 "params": { 00:16:33.079 "listen_address": { 00:16:33.079 "adrfam": "IPv4", 00:16:33.079 "traddr": "10.0.0.3", 00:16:33.079 "trsvcid": "4420", 00:16:33.079 "trtype": "TCP" 00:16:33.079 }, 00:16:33.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.079 "secure_channel": true 00:16:33.079 } 00:16:33.079 } 00:16:33.079 ] 00:16:33.079 } 00:16:33.079 ] 00:16:33.079 }' 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83809 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83809 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83809 ']' 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
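
The JSON document echoed above is handed to nvmf_tgt on /dev/fd/62, so the PSK (the "keyring" entry pointing at /tmp/tmp.88WQ6ZvsQP), the ssl sock implementation, the subsystem, and the listener with "secure_channel": true all come up from a single configuration document instead of a series of live RPCs. A minimal sketch of the same replay pattern, assuming a target already running on the default /var/tmp/spdk.sock and a hypothetical output file tgt.json:

  # Capture the running target's configuration as replayable JSON.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt.json
  # Restart the target with that configuration applied at startup;
  # -m 0x2 pins the app to core 1, matching the run above.
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c tgt.json
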
00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.079 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:33.079 [2024-12-06 17:16:15.281619] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:33.079 [2024-12-06 17:16:15.281762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.336 [2024-12-06 17:16:15.434141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.336 [2024-12-06 17:16:15.469077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.336 [2024-12-06 17:16:15.469145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.336 [2024-12-06 17:16:15.469160] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.336 [2024-12-06 17:16:15.469171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.336 [2024-12-06 17:16:15.469182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.336 [2024-12-06 17:16:15.469584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.594 [2024-12-06 17:16:15.666172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.594 [2024-12-06 17:16:15.698080] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:33.594 [2024-12-06 17:16:15.698383] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83856 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83856 /var/tmp/bdevperf.sock 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83856 ']' 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:16:34.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.529 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:34.529 "subsystems": [ 00:16:34.529 { 00:16:34.529 "subsystem": "keyring", 00:16:34.529 "config": [ 00:16:34.529 { 00:16:34.529 "method": "keyring_file_add_key", 00:16:34.529 "params": { 00:16:34.529 "name": "key0", 00:16:34.529 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:34.529 } 00:16:34.529 } 00:16:34.529 ] 00:16:34.529 }, 00:16:34.529 { 00:16:34.529 "subsystem": "iobuf", 00:16:34.529 "config": [ 00:16:34.529 { 00:16:34.529 "method": "iobuf_set_options", 00:16:34.529 "params": { 00:16:34.529 "enable_numa": false, 00:16:34.529 "large_bufsize": 135168, 00:16:34.529 "large_pool_count": 1024, 00:16:34.529 "small_bufsize": 8192, 00:16:34.530 "small_pool_count": 8192 00:16:34.530 } 00:16:34.530 } 00:16:34.530 ] 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "subsystem": "sock", 00:16:34.530 "config": [ 00:16:34.530 { 00:16:34.530 "method": "sock_set_default_impl", 00:16:34.530 "params": { 00:16:34.530 "impl_name": "posix" 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "sock_impl_set_options", 00:16:34.530 "params": { 00:16:34.530 "enable_ktls": false, 00:16:34.530 "enable_placement_id": 0, 00:16:34.530 "enable_quickack": false, 00:16:34.530 "enable_recv_pipe": true, 00:16:34.530 "enable_zerocopy_send_client": false, 00:16:34.530 "enable_zerocopy_send_server": true, 00:16:34.530 "impl_name": "ssl", 00:16:34.530 "recv_buf_size": 4096, 00:16:34.530 "send_buf_size": 4096, 00:16:34.530 "tls_version": 0, 00:16:34.530 "zerocopy_threshold": 0 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "sock_impl_set_options", 00:16:34.530 "params": { 00:16:34.530 "enable_ktls": false, 00:16:34.530 "enable_placement_id": 0, 00:16:34.530 "enable_quickack": false, 00:16:34.530 "enable_recv_pipe": true, 00:16:34.530 "enable_zerocopy_send_client": false, 00:16:34.530 "enable_zerocopy_send_server": true, 00:16:34.530 "impl_name": "posix", 00:16:34.530 "recv_buf_size": 2097152, 00:16:34.530 "send_buf_size": 2097152, 00:16:34.530 "tls_version": 0, 00:16:34.530 "zerocopy_threshold": 0 00:16:34.530 } 00:16:34.530 } 00:16:34.530 ] 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "subsystem": "vmd", 00:16:34.530 "config": [] 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "subsystem": "accel", 00:16:34.530 "config": [ 00:16:34.530 { 00:16:34.530 "method": "accel_set_options", 00:16:34.530 "params": { 00:16:34.530 "buf_count": 2048, 00:16:34.530 "large_cache_size": 16, 00:16:34.530 "sequence_count": 2048, 00:16:34.530 "small_cache_size": 128, 00:16:34.530 "task_count": 2048 00:16:34.530 } 00:16:34.530 } 00:16:34.530 ] 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "subsystem": "bdev", 00:16:34.530 "config": [ 00:16:34.530 { 00:16:34.530 "method": "bdev_set_options", 00:16:34.530 "params": { 00:16:34.530 "bdev_auto_examine": true, 00:16:34.530 "bdev_io_cache_size": 256, 00:16:34.530 "bdev_io_pool_size": 65535, 00:16:34.530 "iobuf_large_cache_size": 16, 00:16:34.530 "iobuf_small_cache_size": 128 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "bdev_raid_set_options", 00:16:34.530 "params": { 00:16:34.530 "process_max_bandwidth_mb_sec": 0, 00:16:34.530 "process_window_size_kb": 1024 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": 
"bdev_iscsi_set_options", 00:16:34.530 "params": { 00:16:34.530 "timeout_sec": 30 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "bdev_nvme_set_options", 00:16:34.530 "params": { 00:16:34.530 "action_on_timeout": "none", 00:16:34.530 "allow_accel_sequence": false, 00:16:34.530 "arbitration_burst": 0, 00:16:34.530 "bdev_retry_count": 3, 00:16:34.530 "ctrlr_loss_timeout_sec": 0, 00:16:34.530 "delay_cmd_submit": true, 00:16:34.530 "dhchap_dhgroups": [ 00:16:34.530 "null", 00:16:34.530 "ffdhe2048", 00:16:34.530 "ffdhe3072", 00:16:34.530 "ffdhe4096", 00:16:34.530 "ffdhe6144", 00:16:34.530 "ffdhe8192" 00:16:34.530 ], 00:16:34.530 "dhchap_digests": [ 00:16:34.530 "sha256", 00:16:34.530 "sha384", 00:16:34.530 "sha512" 00:16:34.530 ], 00:16:34.530 "disable_auto_failback": false, 00:16:34.530 "fast_io_fail_timeout_sec": 0, 00:16:34.530 "generate_uuids": false, 00:16:34.530 "high_priority_weight": 0, 00:16:34.530 "io_path_stat": false, 00:16:34.530 "io_queue_requests": 512, 00:16:34.530 "keep_alive_timeout_ms": 10000, 00:16:34.530 "low_priority_weight": 0, 00:16:34.530 "medium_priority_weight": 0, 00:16:34.530 "nvme_adminq_poll_period_us": 10000, 00:16:34.530 "nvme_error_stat": false, 00:16:34.530 "nvme_ioq_poll_period_us": 0, 00:16:34.530 "rdma_cm_event_timeout_ms": 0, 00:16:34.530 "rdma_max_cq_size": 0, 00:16:34.530 "rdma_srq_size": 0, 00:16:34.530 "reconnect_delay_sec": 0, 00:16:34.530 "timeout_admin_us": 0, 00:16:34.530 "timeout_us": 0, 00:16:34.530 "transport_ack_timeout": 0, 00:16:34.530 "transport_retry_count": 4, 00:16:34.530 "transport_tos": 0 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "bdev_nvme_attach_controller", 00:16:34.530 "params": { 00:16:34.530 "adrfam": "IPv4", 00:16:34.530 "ctrlr_loss_timeout_sec": 0, 00:16:34.530 "ddgst": false, 00:16:34.530 "fast_io_fail_timeout_sec": 0, 00:16:34.530 "hdgst": false, 00:16:34.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:34.530 "multipath": "multipath", 00:16:34.530 "name": "TLSTEST", 00:16:34.530 "prchk_guard": false, 00:16:34.530 "prchk_reftag": false, 00:16:34.530 "psk": "key0", 00:16:34.530 "reconnect_delay_sec": 0, 00:16:34.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:34.530 "traddr": "10.0.0.3", 00:16:34.530 "trsvcid": "4420", 00:16:34.530 "trtype": "TCP" 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "bdev_nvme_set_hotplug", 00:16:34.530 "params": { 00:16:34.530 "enable": false, 00:16:34.530 "period_us": 100000 00:16:34.530 } 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "method": "bdev_wait_for_examine" 00:16:34.530 } 00:16:34.530 ] 00:16:34.530 }, 00:16:34.530 { 00:16:34.530 "subsystem": "nbd", 00:16:34.530 "config": [] 00:16:34.530 } 00:16:34.530 ] 00:16:34.530 }' 00:16:34.530 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:34.530 [2024-12-06 17:16:16.568017] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:16:34.530 [2024-12-06 17:16:16.568112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83856 ] 00:16:34.530 [2024-12-06 17:16:16.710918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.530 [2024-12-06 17:16:16.747706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.789 [2024-12-06 17:16:16.887226] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:35.723 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.723 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:35.723 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:35.723 Running I/O for 10 seconds... 00:16:37.592 3480.00 IOPS, 13.59 MiB/s [2024-12-06T17:16:20.824Z] 3680.00 IOPS, 14.38 MiB/s [2024-12-06T17:16:22.194Z] 3788.67 IOPS, 14.80 MiB/s [2024-12-06T17:16:23.126Z] 3811.25 IOPS, 14.89 MiB/s [2024-12-06T17:16:24.057Z] 3799.20 IOPS, 14.84 MiB/s [2024-12-06T17:16:24.988Z] 3791.83 IOPS, 14.81 MiB/s [2024-12-06T17:16:25.919Z] 3802.00 IOPS, 14.85 MiB/s [2024-12-06T17:16:26.853Z] 3809.00 IOPS, 14.88 MiB/s [2024-12-06T17:16:28.228Z] 3824.67 IOPS, 14.94 MiB/s [2024-12-06T17:16:28.228Z] 3840.70 IOPS, 15.00 MiB/s 00:16:45.930 Latency(us) 00:16:45.930 [2024-12-06T17:16:28.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:45.930 Verification LBA range: start 0x0 length 0x2000 00:16:45.930 TLSTESTn1 : 10.02 3847.50 15.03 0.00 0.00 33208.78 5093.93 26929.34 00:16:45.930 [2024-12-06T17:16:28.228Z] =================================================================================================================== 00:16:45.930 [2024-12-06T17:16:28.228Z] Total : 3847.50 15.03 0.00 0.00 33208.78 5093.93 26929.34 00:16:45.930 { 00:16:45.930 "results": [ 00:16:45.930 { 00:16:45.930 "job": "TLSTESTn1", 00:16:45.930 "core_mask": "0x4", 00:16:45.930 "workload": "verify", 00:16:45.930 "status": "finished", 00:16:45.930 "verify_range": { 00:16:45.930 "start": 0, 00:16:45.930 "length": 8192 00:16:45.930 }, 00:16:45.930 "queue_depth": 128, 00:16:45.930 "io_size": 4096, 00:16:45.930 "runtime": 10.015603, 00:16:45.930 "iops": 3847.496750819696, 00:16:45.930 "mibps": 15.029284182889437, 00:16:45.930 "io_failed": 0, 00:16:45.930 "io_timeout": 0, 00:16:45.930 "avg_latency_us": 33208.78346155208, 00:16:45.930 "min_latency_us": 5093.9345454545455, 00:16:45.930 "max_latency_us": 26929.33818181818 00:16:45.930 } 00:16:45.930 ], 00:16:45.930 "core_count": 1 00:16:45.930 } 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83856 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83856 ']' 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83856 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83856 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83856' 00:16:45.930 killing process with pid 83856 00:16:45.930 Received shutdown signal, test time was about 10.000000 seconds 00:16:45.930 00:16:45.930 Latency(us) 00:16:45.930 [2024-12-06T17:16:28.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:45.930 [2024-12-06T17:16:28.228Z] =================================================================================================================== 00:16:45.930 [2024-12-06T17:16:28.228Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83856 00:16:45.930 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83856 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83809 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83809 ']' 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83809 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83809 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:45.930 killing process with pid 83809 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83809' 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83809 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83809 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84007 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84007 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 84007 ']' 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:45.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:45.930 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.188 [2024-12-06 17:16:28.275445] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:46.188 [2024-12-06 17:16:28.275575] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.188 [2024-12-06 17:16:28.422050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.188 [2024-12-06 17:16:28.454530] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.188 [2024-12-06 17:16:28.454768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.188 [2024-12-06 17:16:28.454858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.188 [2024-12-06 17:16:28.454939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.188 [2024-12-06 17:16:28.455017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:46.188 [2024-12-06 17:16:28.455419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.88WQ6ZvsQP 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.88WQ6ZvsQP 00:16:46.446 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:46.704 [2024-12-06 17:16:28.830061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.705 17:16:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:46.963 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:47.221 [2024-12-06 17:16:29.398209] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:47.221 [2024-12-06 17:16:29.399087] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.221 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:47.479 malloc0 00:16:47.479 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:48.073 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:48.331 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:48.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
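
setup_nvmf_tgt (target/tls.sh lines 50-59) provisions the freshly started target over RPC instead of a config file; condensed, the sequence traced above amounts to the following sketch. The default RPC socket is assumed, $rpc is shorthand introduced here, and the -k on nvmf_subsystem_add_listener is what requests the TLS secure channel:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Transport, subsystem, and a TLS-enabled listener (-k).
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  # Back the subsystem with a malloc bdev namespace.
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Register the PSK and bind it to the allowed host.
  $rpc keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
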
00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84107 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84107 /var/tmp/bdevperf.sock 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84107 ']' 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.589 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.589 [2024-12-06 17:16:30.750806] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:48.589 [2024-12-06 17:16:30.750941] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84107 ] 00:16:48.848 [2024-12-06 17:16:30.904850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.848 [2024-12-06 17:16:30.940242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.848 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.848 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:48.848 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:49.105 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:49.669 [2024-12-06 17:16:31.772420] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.670 nvme0n1 00:16:49.670 17:16:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:49.951 Running I/O for 1 seconds... 
00:16:50.883 3768.00 IOPS, 14.72 MiB/s 00:16:50.883 Latency(us) 00:16:50.883 [2024-12-06T17:16:33.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.883 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:50.883 Verification LBA range: start 0x0 length 0x2000 00:16:50.883 nvme0n1 : 1.02 3830.27 14.96 0.00 0.00 33102.18 5808.87 39798.23 00:16:50.883 [2024-12-06T17:16:33.181Z] =================================================================================================================== 00:16:50.883 [2024-12-06T17:16:33.182Z] Total : 3830.27 14.96 0.00 0.00 33102.18 5808.87 39798.23 00:16:50.884 { 00:16:50.884 "results": [ 00:16:50.884 { 00:16:50.884 "job": "nvme0n1", 00:16:50.884 "core_mask": "0x2", 00:16:50.884 "workload": "verify", 00:16:50.884 "status": "finished", 00:16:50.884 "verify_range": { 00:16:50.884 "start": 0, 00:16:50.884 "length": 8192 00:16:50.884 }, 00:16:50.884 "queue_depth": 128, 00:16:50.884 "io_size": 4096, 00:16:50.884 "runtime": 1.017422, 00:16:50.884 "iops": 3830.269052566192, 00:16:50.884 "mibps": 14.961988486586687, 00:16:50.884 "io_failed": 0, 00:16:50.884 "io_timeout": 0, 00:16:50.884 "avg_latency_us": 33102.17749224345, 00:16:50.884 "min_latency_us": 5808.872727272727, 00:16:50.884 "max_latency_us": 39798.225454545456 00:16:50.884 } 00:16:50.884 ], 00:16:50.884 "core_count": 1 00:16:50.884 } 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84107 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84107 ']' 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84107 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84107 00:16:50.884 killing process with pid 84107 00:16:50.884 Received shutdown signal, test time was about 1.000000 seconds 00:16:50.884 00:16:50.884 Latency(us) 00:16:50.884 [2024-12-06T17:16:33.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.884 [2024-12-06T17:16:33.182Z] =================================================================================================================== 00:16:50.884 [2024-12-06T17:16:33.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84107' 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84107 00:16:50.884 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84107 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84007 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84007 ']' 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84007 00:16:51.142 17:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84007 00:16:51.142 killing process with pid 84007 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84007' 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84007 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84007 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84174 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84174 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84174 ']' 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:51.142 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.400 [2024-12-06 17:16:33.441971] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:51.400 [2024-12-06 17:16:33.442085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.400 [2024-12-06 17:16:33.590287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.400 [2024-12-06 17:16:33.621607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.400 [2024-12-06 17:16:33.621672] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:51.400 [2024-12-06 17:16:33.621684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.400 [2024-12-06 17:16:33.621693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.400 [2024-12-06 17:16:33.621700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.400 [2024-12-06 17:16:33.622010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.659 [2024-12-06 17:16:33.750589] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.659 malloc0 00:16:51.659 [2024-12-06 17:16:33.781086] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:51.659 [2024-12-06 17:16:33.781360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:51.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84205 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84205 /var/tmp/bdevperf.sock 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84205 ']' 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.659 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:51.659 [2024-12-06 17:16:33.881049] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:16:51.659 [2024-12-06 17:16:33.881404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84205 ] 00:16:51.918 [2024-12-06 17:16:34.027346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.918 [2024-12-06 17:16:34.060365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.918 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.918 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:51.918 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.88WQ6ZvsQP 00:16:52.484 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:52.484 [2024-12-06 17:16:34.757650] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.761 nvme0n1 00:16:52.761 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:52.761 Running I/O for 1 seconds... 00:16:53.694 3762.00 IOPS, 14.70 MiB/s 00:16:53.694 Latency(us) 00:16:53.694 [2024-12-06T17:16:35.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.694 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:53.694 Verification LBA range: start 0x0 length 0x2000 00:16:53.694 nvme0n1 : 1.02 3825.14 14.94 0.00 0.00 33146.12 5451.40 26929.34 00:16:53.694 [2024-12-06T17:16:35.992Z] =================================================================================================================== 00:16:53.694 [2024-12-06T17:16:35.992Z] Total : 3825.14 14.94 0.00 0.00 33146.12 5451.40 26929.34 00:16:53.694 { 00:16:53.694 "results": [ 00:16:53.694 { 00:16:53.694 "job": "nvme0n1", 00:16:53.694 "core_mask": "0x2", 00:16:53.694 "workload": "verify", 00:16:53.694 "status": "finished", 00:16:53.694 "verify_range": { 00:16:53.694 "start": 0, 00:16:53.694 "length": 8192 00:16:53.694 }, 00:16:53.694 "queue_depth": 128, 00:16:53.694 "io_size": 4096, 00:16:53.694 "runtime": 1.017218, 00:16:53.694 "iops": 3825.1387608162654, 00:16:53.694 "mibps": 14.941948284438537, 00:16:53.694 "io_failed": 0, 00:16:53.694 "io_timeout": 0, 00:16:53.694 "avg_latency_us": 33146.12469802107, 00:16:53.694 "min_latency_us": 5451.403636363636, 00:16:53.694 "max_latency_us": 26929.33818181818 00:16:53.694 } 00:16:53.694 ], 00:16:53.694 "core_count": 1 00:16:53.694 } 00:16:53.951 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:53.951 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.951 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.951 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.951 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:16:53.951 "subsystems": [ 00:16:53.951 { 00:16:53.951 "subsystem": "keyring", 00:16:53.951 "config": [ 00:16:53.951 { 00:16:53.951 "method": "keyring_file_add_key", 00:16:53.951 "params": { 00:16:53.951 "name": "key0", 00:16:53.951 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:53.951 } 00:16:53.951 } 00:16:53.951 ] 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "subsystem": "iobuf", 00:16:53.951 "config": [ 00:16:53.951 { 00:16:53.951 "method": "iobuf_set_options", 00:16:53.951 "params": { 00:16:53.951 "enable_numa": false, 00:16:53.951 "large_bufsize": 135168, 00:16:53.951 "large_pool_count": 1024, 00:16:53.951 "small_bufsize": 8192, 00:16:53.951 "small_pool_count": 8192 00:16:53.951 } 00:16:53.951 } 00:16:53.951 ] 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "subsystem": "sock", 00:16:53.951 "config": [ 00:16:53.951 { 00:16:53.951 "method": "sock_set_default_impl", 00:16:53.951 "params": { 00:16:53.951 "impl_name": "posix" 00:16:53.951 } 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "method": "sock_impl_set_options", 00:16:53.951 "params": { 00:16:53.951 "enable_ktls": false, 00:16:53.951 "enable_placement_id": 0, 00:16:53.951 "enable_quickack": false, 00:16:53.951 "enable_recv_pipe": true, 00:16:53.951 "enable_zerocopy_send_client": false, 00:16:53.951 "enable_zerocopy_send_server": true, 00:16:53.951 "impl_name": "ssl", 00:16:53.951 "recv_buf_size": 4096, 00:16:53.951 "send_buf_size": 4096, 00:16:53.951 "tls_version": 0, 00:16:53.951 "zerocopy_threshold": 0 00:16:53.951 } 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "method": "sock_impl_set_options", 00:16:53.951 "params": { 00:16:53.951 "enable_ktls": false, 00:16:53.951 "enable_placement_id": 0, 00:16:53.951 "enable_quickack": false, 00:16:53.951 "enable_recv_pipe": true, 00:16:53.951 "enable_zerocopy_send_client": false, 00:16:53.951 "enable_zerocopy_send_server": true, 00:16:53.951 "impl_name": "posix", 00:16:53.951 "recv_buf_size": 2097152, 00:16:53.951 "send_buf_size": 2097152, 00:16:53.951 "tls_version": 0, 00:16:53.951 "zerocopy_threshold": 0 00:16:53.951 } 00:16:53.951 } 00:16:53.951 ] 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "subsystem": "vmd", 00:16:53.951 "config": [] 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "subsystem": "accel", 00:16:53.951 "config": [ 00:16:53.951 { 00:16:53.951 "method": "accel_set_options", 00:16:53.951 "params": { 00:16:53.951 "buf_count": 2048, 00:16:53.951 "large_cache_size": 16, 00:16:53.951 "sequence_count": 2048, 00:16:53.951 "small_cache_size": 128, 00:16:53.951 "task_count": 2048 00:16:53.951 } 00:16:53.951 } 00:16:53.951 ] 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "subsystem": "bdev", 00:16:53.951 "config": [ 00:16:53.951 { 00:16:53.951 "method": "bdev_set_options", 00:16:53.951 "params": { 00:16:53.951 "bdev_auto_examine": true, 00:16:53.951 "bdev_io_cache_size": 256, 00:16:53.951 "bdev_io_pool_size": 65535, 00:16:53.951 "iobuf_large_cache_size": 16, 00:16:53.951 "iobuf_small_cache_size": 128 00:16:53.951 } 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "method": "bdev_raid_set_options", 00:16:53.951 "params": { 00:16:53.951 "process_max_bandwidth_mb_sec": 0, 00:16:53.951 "process_window_size_kb": 1024 00:16:53.951 } 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "method": "bdev_iscsi_set_options", 00:16:53.951 "params": { 00:16:53.951 "timeout_sec": 30 00:16:53.951 } 00:16:53.951 }, 00:16:53.951 { 00:16:53.951 "method": "bdev_nvme_set_options", 00:16:53.951 "params": { 00:16:53.951 "action_on_timeout": "none", 00:16:53.952 "allow_accel_sequence": false, 00:16:53.952 "arbitration_burst": 0, 00:16:53.952 
"bdev_retry_count": 3, 00:16:53.952 "ctrlr_loss_timeout_sec": 0, 00:16:53.952 "delay_cmd_submit": true, 00:16:53.952 "dhchap_dhgroups": [ 00:16:53.952 "null", 00:16:53.952 "ffdhe2048", 00:16:53.952 "ffdhe3072", 00:16:53.952 "ffdhe4096", 00:16:53.952 "ffdhe6144", 00:16:53.952 "ffdhe8192" 00:16:53.952 ], 00:16:53.952 "dhchap_digests": [ 00:16:53.952 "sha256", 00:16:53.952 "sha384", 00:16:53.952 "sha512" 00:16:53.952 ], 00:16:53.952 "disable_auto_failback": false, 00:16:53.952 "fast_io_fail_timeout_sec": 0, 00:16:53.952 "generate_uuids": false, 00:16:53.952 "high_priority_weight": 0, 00:16:53.952 "io_path_stat": false, 00:16:53.952 "io_queue_requests": 0, 00:16:53.952 "keep_alive_timeout_ms": 10000, 00:16:53.952 "low_priority_weight": 0, 00:16:53.952 "medium_priority_weight": 0, 00:16:53.952 "nvme_adminq_poll_period_us": 10000, 00:16:53.952 "nvme_error_stat": false, 00:16:53.952 "nvme_ioq_poll_period_us": 0, 00:16:53.952 "rdma_cm_event_timeout_ms": 0, 00:16:53.952 "rdma_max_cq_size": 0, 00:16:53.952 "rdma_srq_size": 0, 00:16:53.952 "reconnect_delay_sec": 0, 00:16:53.952 "timeout_admin_us": 0, 00:16:53.952 "timeout_us": 0, 00:16:53.952 "transport_ack_timeout": 0, 00:16:53.952 "transport_retry_count": 4, 00:16:53.952 "transport_tos": 0 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "bdev_nvme_set_hotplug", 00:16:53.952 "params": { 00:16:53.952 "enable": false, 00:16:53.952 "period_us": 100000 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "bdev_malloc_create", 00:16:53.952 "params": { 00:16:53.952 "block_size": 4096, 00:16:53.952 "dif_is_head_of_md": false, 00:16:53.952 "dif_pi_format": 0, 00:16:53.952 "dif_type": 0, 00:16:53.952 "md_size": 0, 00:16:53.952 "name": "malloc0", 00:16:53.952 "num_blocks": 8192, 00:16:53.952 "optimal_io_boundary": 0, 00:16:53.952 "physical_block_size": 4096, 00:16:53.952 "uuid": "09a30af8-8365-4d4d-830e-ac571142954a" 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "bdev_wait_for_examine" 00:16:53.952 } 00:16:53.952 ] 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "subsystem": "nbd", 00:16:53.952 "config": [] 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "subsystem": "scheduler", 00:16:53.952 "config": [ 00:16:53.952 { 00:16:53.952 "method": "framework_set_scheduler", 00:16:53.952 "params": { 00:16:53.952 "name": "static" 00:16:53.952 } 00:16:53.952 } 00:16:53.952 ] 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "subsystem": "nvmf", 00:16:53.952 "config": [ 00:16:53.952 { 00:16:53.952 "method": "nvmf_set_config", 00:16:53.952 "params": { 00:16:53.952 "admin_cmd_passthru": { 00:16:53.952 "identify_ctrlr": false 00:16:53.952 }, 00:16:53.952 "dhchap_dhgroups": [ 00:16:53.952 "null", 00:16:53.952 "ffdhe2048", 00:16:53.952 "ffdhe3072", 00:16:53.952 "ffdhe4096", 00:16:53.952 "ffdhe6144", 00:16:53.952 "ffdhe8192" 00:16:53.952 ], 00:16:53.952 "dhchap_digests": [ 00:16:53.952 "sha256", 00:16:53.952 "sha384", 00:16:53.952 "sha512" 00:16:53.952 ], 00:16:53.952 "discovery_filter": "match_any" 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_set_max_subsystems", 00:16:53.952 "params": { 00:16:53.952 "max_subsystems": 1024 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_set_crdt", 00:16:53.952 "params": { 00:16:53.952 "crdt1": 0, 00:16:53.952 "crdt2": 0, 00:16:53.952 "crdt3": 0 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_create_transport", 00:16:53.952 "params": { 00:16:53.952 "abort_timeout_sec": 1, 00:16:53.952 "ack_timeout": 0, 
00:16:53.952 "buf_cache_size": 4294967295, 00:16:53.952 "c2h_success": false, 00:16:53.952 "data_wr_pool_size": 0, 00:16:53.952 "dif_insert_or_strip": false, 00:16:53.952 "in_capsule_data_size": 4096, 00:16:53.952 "io_unit_size": 131072, 00:16:53.952 "max_aq_depth": 128, 00:16:53.952 "max_io_qpairs_per_ctrlr": 127, 00:16:53.952 "max_io_size": 131072, 00:16:53.952 "max_queue_depth": 128, 00:16:53.952 "num_shared_buffers": 511, 00:16:53.952 "sock_priority": 0, 00:16:53.952 "trtype": "TCP", 00:16:53.952 "zcopy": false 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_create_subsystem", 00:16:53.952 "params": { 00:16:53.952 "allow_any_host": false, 00:16:53.952 "ana_reporting": false, 00:16:53.952 "max_cntlid": 65519, 00:16:53.952 "max_namespaces": 32, 00:16:53.952 "min_cntlid": 1, 00:16:53.952 "model_number": "SPDK bdev Controller", 00:16:53.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.952 "serial_number": "00000000000000000000" 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_subsystem_add_host", 00:16:53.952 "params": { 00:16:53.952 "host": "nqn.2016-06.io.spdk:host1", 00:16:53.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.952 "psk": "key0" 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_subsystem_add_ns", 00:16:53.952 "params": { 00:16:53.952 "namespace": { 00:16:53.952 "bdev_name": "malloc0", 00:16:53.952 "nguid": "09A30AF883654D4D830EAC571142954A", 00:16:53.952 "no_auto_visible": false, 00:16:53.952 "nsid": 1, 00:16:53.952 "uuid": "09a30af8-8365-4d4d-830e-ac571142954a" 00:16:53.952 }, 00:16:53.952 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:53.952 } 00:16:53.952 }, 00:16:53.952 { 00:16:53.952 "method": "nvmf_subsystem_add_listener", 00:16:53.952 "params": { 00:16:53.952 "listen_address": { 00:16:53.952 "adrfam": "IPv4", 00:16:53.952 "traddr": "10.0.0.3", 00:16:53.952 "trsvcid": "4420", 00:16:53.952 "trtype": "TCP" 00:16:53.952 }, 00:16:53.952 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:53.952 "secure_channel": false, 00:16:53.952 "sock_impl": "ssl" 00:16:53.952 } 00:16:53.952 } 00:16:53.952 ] 00:16:53.952 } 00:16:53.952 ] 00:16:53.952 }' 00:16:53.952 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:54.210 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:54.210 "subsystems": [ 00:16:54.210 { 00:16:54.210 "subsystem": "keyring", 00:16:54.211 "config": [ 00:16:54.211 { 00:16:54.211 "method": "keyring_file_add_key", 00:16:54.211 "params": { 00:16:54.211 "name": "key0", 00:16:54.211 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:54.211 } 00:16:54.211 } 00:16:54.211 ] 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "subsystem": "iobuf", 00:16:54.211 "config": [ 00:16:54.211 { 00:16:54.211 "method": "iobuf_set_options", 00:16:54.211 "params": { 00:16:54.211 "enable_numa": false, 00:16:54.211 "large_bufsize": 135168, 00:16:54.211 "large_pool_count": 1024, 00:16:54.211 "small_bufsize": 8192, 00:16:54.211 "small_pool_count": 8192 00:16:54.211 } 00:16:54.211 } 00:16:54.211 ] 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "subsystem": "sock", 00:16:54.211 "config": [ 00:16:54.211 { 00:16:54.211 "method": "sock_set_default_impl", 00:16:54.211 "params": { 00:16:54.211 "impl_name": "posix" 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "sock_impl_set_options", 00:16:54.211 "params": { 00:16:54.211 "enable_ktls": false, 00:16:54.211 "enable_placement_id": 0, 
00:16:54.211 "enable_quickack": false, 00:16:54.211 "enable_recv_pipe": true, 00:16:54.211 "enable_zerocopy_send_client": false, 00:16:54.211 "enable_zerocopy_send_server": true, 00:16:54.211 "impl_name": "ssl", 00:16:54.211 "recv_buf_size": 4096, 00:16:54.211 "send_buf_size": 4096, 00:16:54.211 "tls_version": 0, 00:16:54.211 "zerocopy_threshold": 0 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "sock_impl_set_options", 00:16:54.211 "params": { 00:16:54.211 "enable_ktls": false, 00:16:54.211 "enable_placement_id": 0, 00:16:54.211 "enable_quickack": false, 00:16:54.211 "enable_recv_pipe": true, 00:16:54.211 "enable_zerocopy_send_client": false, 00:16:54.211 "enable_zerocopy_send_server": true, 00:16:54.211 "impl_name": "posix", 00:16:54.211 "recv_buf_size": 2097152, 00:16:54.211 "send_buf_size": 2097152, 00:16:54.211 "tls_version": 0, 00:16:54.211 "zerocopy_threshold": 0 00:16:54.211 } 00:16:54.211 } 00:16:54.211 ] 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "subsystem": "vmd", 00:16:54.211 "config": [] 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "subsystem": "accel", 00:16:54.211 "config": [ 00:16:54.211 { 00:16:54.211 "method": "accel_set_options", 00:16:54.211 "params": { 00:16:54.211 "buf_count": 2048, 00:16:54.211 "large_cache_size": 16, 00:16:54.211 "sequence_count": 2048, 00:16:54.211 "small_cache_size": 128, 00:16:54.211 "task_count": 2048 00:16:54.211 } 00:16:54.211 } 00:16:54.211 ] 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "subsystem": "bdev", 00:16:54.211 "config": [ 00:16:54.211 { 00:16:54.211 "method": "bdev_set_options", 00:16:54.211 "params": { 00:16:54.211 "bdev_auto_examine": true, 00:16:54.211 "bdev_io_cache_size": 256, 00:16:54.211 "bdev_io_pool_size": 65535, 00:16:54.211 "iobuf_large_cache_size": 16, 00:16:54.211 "iobuf_small_cache_size": 128 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_raid_set_options", 00:16:54.211 "params": { 00:16:54.211 "process_max_bandwidth_mb_sec": 0, 00:16:54.211 "process_window_size_kb": 1024 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_iscsi_set_options", 00:16:54.211 "params": { 00:16:54.211 "timeout_sec": 30 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_nvme_set_options", 00:16:54.211 "params": { 00:16:54.211 "action_on_timeout": "none", 00:16:54.211 "allow_accel_sequence": false, 00:16:54.211 "arbitration_burst": 0, 00:16:54.211 "bdev_retry_count": 3, 00:16:54.211 "ctrlr_loss_timeout_sec": 0, 00:16:54.211 "delay_cmd_submit": true, 00:16:54.211 "dhchap_dhgroups": [ 00:16:54.211 "null", 00:16:54.211 "ffdhe2048", 00:16:54.211 "ffdhe3072", 00:16:54.211 "ffdhe4096", 00:16:54.211 "ffdhe6144", 00:16:54.211 "ffdhe8192" 00:16:54.211 ], 00:16:54.211 "dhchap_digests": [ 00:16:54.211 "sha256", 00:16:54.211 "sha384", 00:16:54.211 "sha512" 00:16:54.211 ], 00:16:54.211 "disable_auto_failback": false, 00:16:54.211 "fast_io_fail_timeout_sec": 0, 00:16:54.211 "generate_uuids": false, 00:16:54.211 "high_priority_weight": 0, 00:16:54.211 "io_path_stat": false, 00:16:54.211 "io_queue_requests": 512, 00:16:54.211 "keep_alive_timeout_ms": 10000, 00:16:54.211 "low_priority_weight": 0, 00:16:54.211 "medium_priority_weight": 0, 00:16:54.211 "nvme_adminq_poll_period_us": 10000, 00:16:54.211 "nvme_error_stat": false, 00:16:54.211 "nvme_ioq_poll_period_us": 0, 00:16:54.211 "rdma_cm_event_timeout_ms": 0, 00:16:54.211 "rdma_max_cq_size": 0, 00:16:54.211 "rdma_srq_size": 0, 00:16:54.211 "reconnect_delay_sec": 0, 00:16:54.211 "timeout_admin_us": 0, 00:16:54.211 
"timeout_us": 0, 00:16:54.211 "transport_ack_timeout": 0, 00:16:54.211 "transport_retry_count": 4, 00:16:54.211 "transport_tos": 0 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_nvme_attach_controller", 00:16:54.211 "params": { 00:16:54.211 "adrfam": "IPv4", 00:16:54.211 "ctrlr_loss_timeout_sec": 0, 00:16:54.211 "ddgst": false, 00:16:54.211 "fast_io_fail_timeout_sec": 0, 00:16:54.211 "hdgst": false, 00:16:54.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.211 "multipath": "multipath", 00:16:54.211 "name": "nvme0", 00:16:54.211 "prchk_guard": false, 00:16:54.211 "prchk_reftag": false, 00:16:54.211 "psk": "key0", 00:16:54.211 "reconnect_delay_sec": 0, 00:16:54.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.211 "traddr": "10.0.0.3", 00:16:54.211 "trsvcid": "4420", 00:16:54.211 "trtype": "TCP" 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_nvme_set_hotplug", 00:16:54.211 "params": { 00:16:54.211 "enable": false, 00:16:54.211 "period_us": 100000 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_enable_histogram", 00:16:54.211 "params": { 00:16:54.211 "enable": true, 00:16:54.211 "name": "nvme0n1" 00:16:54.211 } 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "method": "bdev_wait_for_examine" 00:16:54.211 } 00:16:54.211 ] 00:16:54.211 }, 00:16:54.211 { 00:16:54.211 "subsystem": "nbd", 00:16:54.211 "config": [] 00:16:54.211 } 00:16:54.211 ] 00:16:54.211 }' 00:16:54.211 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84205 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84205 ']' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84205 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84205 00:16:54.470 killing process with pid 84205 00:16:54.470 Received shutdown signal, test time was about 1.000000 seconds 00:16:54.470 00:16:54.470 Latency(us) 00:16:54.470 [2024-12-06T17:16:36.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.470 [2024-12-06T17:16:36.768Z] =================================================================================================================== 00:16:54.470 [2024-12-06T17:16:36.768Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84205' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84205 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84205 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84174 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84174 ']' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84174 
00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84174 00:16:54.470 killing process with pid 84174 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84174' 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84174 00:16:54.470 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84174 00:16:54.729 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:54.729 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:54.729 "subsystems": [ 00:16:54.729 { 00:16:54.729 "subsystem": "keyring", 00:16:54.729 "config": [ 00:16:54.729 { 00:16:54.729 "method": "keyring_file_add_key", 00:16:54.729 "params": { 00:16:54.729 "name": "key0", 00:16:54.729 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:54.729 } 00:16:54.729 } 00:16:54.729 ] 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "subsystem": "iobuf", 00:16:54.729 "config": [ 00:16:54.729 { 00:16:54.729 "method": "iobuf_set_options", 00:16:54.729 "params": { 00:16:54.729 "enable_numa": false, 00:16:54.729 "large_bufsize": 135168, 00:16:54.729 "large_pool_count": 1024, 00:16:54.729 "small_bufsize": 8192, 00:16:54.729 "small_pool_count": 8192 00:16:54.729 } 00:16:54.729 } 00:16:54.729 ] 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "subsystem": "sock", 00:16:54.729 "config": [ 00:16:54.729 { 00:16:54.729 "method": "sock_set_default_impl", 00:16:54.729 "params": { 00:16:54.729 "impl_name": "posix" 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "sock_impl_set_options", 00:16:54.729 "params": { 00:16:54.729 "enable_ktls": false, 00:16:54.729 "enable_placement_id": 0, 00:16:54.729 "enable_quickack": false, 00:16:54.729 "enable_recv_pipe": true, 00:16:54.729 "enable_zerocopy_send_client": false, 00:16:54.729 "enable_zerocopy_send_server": true, 00:16:54.729 "impl_name": "ssl", 00:16:54.729 "recv_buf_size": 4096, 00:16:54.729 "send_buf_size": 4096, 00:16:54.729 "tls_version": 0, 00:16:54.729 "zerocopy_threshold": 0 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "sock_impl_set_options", 00:16:54.729 "params": { 00:16:54.729 "enable_ktls": false, 00:16:54.729 "enable_placement_id": 0, 00:16:54.729 "enable_quickack": false, 00:16:54.729 "enable_recv_pipe": true, 00:16:54.729 "enable_zerocopy_send_client": false, 00:16:54.729 "enable_zerocopy_send_server": true, 00:16:54.729 "impl_name": "posix", 00:16:54.729 "recv_buf_size": 2097152, 00:16:54.729 "send_buf_size": 2097152, 00:16:54.729 "tls_version": 0, 00:16:54.729 "zerocopy_threshold": 0 00:16:54.729 } 00:16:54.729 } 00:16:54.729 ] 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "subsystem": "vmd", 00:16:54.729 "config": [] 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "subsystem": "accel", 00:16:54.729 "config": [ 00:16:54.729 { 00:16:54.729 "method": "accel_set_options", 00:16:54.729 "params": { 00:16:54.729 "buf_count": 2048, 
00:16:54.729 "large_cache_size": 16, 00:16:54.729 "sequence_count": 2048, 00:16:54.729 "small_cache_size": 128, 00:16:54.729 "task_count": 2048 00:16:54.729 } 00:16:54.729 } 00:16:54.729 ] 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "subsystem": "bdev", 00:16:54.729 "config": [ 00:16:54.729 { 00:16:54.729 "method": "bdev_set_options", 00:16:54.729 "params": { 00:16:54.729 "bdev_auto_examine": true, 00:16:54.729 "bdev_io_cache_size": 256, 00:16:54.729 "bdev_io_pool_size": 65535, 00:16:54.729 "iobuf_large_cache_size": 16, 00:16:54.729 "iobuf_small_cache_size": 128 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "bdev_raid_set_options", 00:16:54.729 "params": { 00:16:54.729 "process_max_bandwidth_mb_sec": 0, 00:16:54.729 "process_window_size_kb": 1024 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "bdev_iscsi_set_options", 00:16:54.729 "params": { 00:16:54.729 "timeout_sec": 30 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "bdev_nvme_set_options", 00:16:54.729 "params": { 00:16:54.729 "action_on_timeout": "none", 00:16:54.729 "allow_accel_sequence": false, 00:16:54.729 "arbitration_burst": 0, 00:16:54.729 "bdev_retry_count": 3, 00:16:54.729 "ctrlr_loss_timeout_sec": 0, 00:16:54.729 "delay_cmd_submit": true, 00:16:54.729 "dhchap_dhgroups": [ 00:16:54.729 "null", 00:16:54.729 "ffdhe2048", 00:16:54.729 "ffdhe3072", 00:16:54.729 "ffdhe4096", 00:16:54.729 "ffdhe6144", 00:16:54.729 "ffdhe8192" 00:16:54.729 ], 00:16:54.729 "dhchap_digests": [ 00:16:54.729 "sha256", 00:16:54.729 "sha384", 00:16:54.729 "sha512" 00:16:54.729 ], 00:16:54.729 "disable_auto_failback": false, 00:16:54.729 "fast_io_fail_timeout_sec": 0, 00:16:54.729 "generate_uuids": false, 00:16:54.729 "high_priority_weight": 0, 00:16:54.729 "io_path_stat": false, 00:16:54.729 "io_queue_requests": 0, 00:16:54.729 "keep_alive_timeout_ms": 10000, 00:16:54.729 "low_priority_weight": 0, 00:16:54.729 "medium_priority_weight": 0, 00:16:54.729 "nvme_adminq_poll_period_us": 10000, 00:16:54.729 "nvme_error_stat": false, 00:16:54.729 "nvme_ioq_poll_period_us": 0, 00:16:54.729 "rdma_cm_event_timeout_ms": 0, 00:16:54.729 "rdma_max_cq_size": 0, 00:16:54.729 "rdma_srq_size": 0, 00:16:54.729 "reconnect_delay_sec": 0, 00:16:54.729 "timeout_admin_us": 0, 00:16:54.729 "timeout_us": 0, 00:16:54.729 "transport_ack_timeout": 0, 00:16:54.729 "transport_retry_count": 4, 00:16:54.729 "transport_tos": 0 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "bdev_nvme_set_hotplug", 00:16:54.729 "params": { 00:16:54.729 "enable": false, 00:16:54.729 "period_us": 100000 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "bdev_malloc_create", 00:16:54.729 "params": { 00:16:54.729 "block_size": 4096, 00:16:54.729 "dif_is_head_of_md": false, 00:16:54.729 "dif_pi_format": 0, 00:16:54.729 "dif_type": 0, 00:16:54.729 "md_size": 0, 00:16:54.729 "name": "malloc0", 00:16:54.729 "num_blocks": 8192, 00:16:54.729 "optimal_io_boundary": 0, 00:16:54.729 "physical_block_size": 4096, 00:16:54.729 "uuid": "09a30af8-8365-4d4d-830e-ac571142954a" 00:16:54.729 } 00:16:54.729 }, 00:16:54.729 { 00:16:54.729 "method": "bdev_wait_for_examine" 00:16:54.729 } 00:16:54.730 ] 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "subsystem": "nbd", 00:16:54.730 "config": [] 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "subsystem": "scheduler", 00:16:54.730 "config": [ 00:16:54.730 { 00:16:54.730 "method": "framework_set_scheduler", 00:16:54.730 "params": { 00:16:54.730 "name": "static" 00:16:54.730 } 
00:16:54.730 } 00:16:54.730 ] 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "subsystem": "nvmf", 00:16:54.730 "config": [ 00:16:54.730 { 00:16:54.730 "method": "nvmf_set_config", 00:16:54.730 "params": { 00:16:54.730 "admin_cmd_passthru": { 00:16:54.730 "identify_ctrlr": false 00:16:54.730 }, 00:16:54.730 "dhchap_dhgroups": [ 00:16:54.730 "null", 00:16:54.730 "ffdhe2048", 00:16:54.730 "ffdhe3072", 00:16:54.730 "ffdhe4096", 00:16:54.730 "ffdhe6144", 00:16:54.730 "ffdhe8192" 00:16:54.730 ], 00:16:54.730 "dhchap_digests": [ 00:16:54.730 "sha256", 00:16:54.730 "sha384", 00:16:54.730 "sha512" 00:16:54.730 ], 00:16:54.730 "discovery_filter": "match_any" 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_set_max_subsystems", 00:16:54.730 "params": { 00:16:54.730 "max_subsystems": 1024 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_set_crdt", 00:16:54.730 "params": { 00:16:54.730 "crdt1": 0, 00:16:54.730 "crdt2": 0, 00:16:54.730 "crdt3": 0 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_create_transport", 00:16:54.730 "params": { 00:16:54.730 "abort_timeout_sec": 1, 00:16:54.730 "ack_timeout": 0, 00:16:54.730 "buf_cache_size": 4294967295, 00:16:54.730 "c2h_success": false, 00:16:54.730 "data_wr_pool_size": 0, 00:16:54.730 "dif_insert_or_strip": false, 00:16:54.730 "in_capsule_data_size": 4096, 00:16:54.730 "io_unit_size": 131072, 00:16:54.730 "max_aq_depth": 128, 00:16:54.730 "max_io_qpairs_per_ctrlr": 127, 00:16:54.730 "max_io_size": 131072, 00:16:54.730 "max_queue_depth": 128, 00:16:54.730 "num_shared_buffers": 511, 00:16:54.730 "sock_priority": 0, 00:16:54.730 "trtype": "TCP", 00:16:54.730 "zcopy": false 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_create_subsystem", 00:16:54.730 "params": { 00:16:54.730 "allow_any_host": false, 00:16:54.730 "ana_reporting": false, 00:16:54.730 "max_cntlid": 65519, 00:16:54.730 "max_namespaces": 32, 00:16:54.730 "min_cntlid": 1, 00:16:54.730 "model_number": "SPDK bdev Controller", 00:16:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.730 "serial_number": "00000000000000000000" 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_subsystem_add_host", 00:16:54.730 "params": { 00:16:54.730 "host": "nqn.2016-06.io.spdk:host1", 00:16:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.730 "psk": "key0" 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_subsystem_add_ns", 00:16:54.730 "params": { 00:16:54.730 "namespace": { 00:16:54.730 "bdev_name": "malloc0", 00:16:54.730 "nguid": "09A30AF883654D4D830EAC571142954A", 00:16:54.730 "no_auto_visible": false, 00:16:54.730 "nsid": 1, 00:16:54.730 "uuid": "09a30af8-8365-4d4d-830e-ac571142954a" 00:16:54.730 }, 00:16:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:16:54.730 } 00:16:54.730 }, 00:16:54.730 { 00:16:54.730 "method": "nvmf_subsystem_add_listener", 00:16:54.730 "params": { 00:16:54.730 "listen_address": { 00:16:54.730 "adrfam": "IPv4", 00:16:54.730 "traddr": "10.0.0.3", 00:16:54.730 "trsvcid": "4420", 00:16:54.730 "trtype": "TCP" 00:16:54.730 }, 00:16:54.730 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.730 "secure_channel": false, 00:16:54.730 "sock_impl": "ssl" 00:16:54.730 } 00:16:54.730 } 00:16:54.730 ] 00:16:54.730 } 00:16:54.730 ] 00:16:54.730 }' 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84283 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84283 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84283 ']' 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.730 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.730 [2024-12-06 17:16:36.938137] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:16:54.730 [2024-12-06 17:16:36.938558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.056 [2024-12-06 17:16:37.085687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.056 [2024-12-06 17:16:37.119154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.056 [2024-12-06 17:16:37.119217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.056 [2024-12-06 17:16:37.119230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.056 [2024-12-06 17:16:37.119238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.056 [2024-12-06 17:16:37.119245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
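The target above is launched with -c /dev/fd/62, i.e. its JSON configuration arrives over a file descriptor wired to the echoed document rather than a file on disk. A minimal self-contained sketch of the same trick, using a deliberately tiny one-RPC config as an assumption for illustration (the real config is the large document echoed by target/tls.sh):

    exec 62< <(echo '{"subsystems":[{"subsystem":"sock","config":[{"method":"sock_set_default_impl","params":{"impl_name":"posix"}}]}]}')
    build/bin/nvmf_tgt -i 0 -c /dev/fd/62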
00:16:55.056 [2024-12-06 17:16:37.119643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.056 [2024-12-06 17:16:37.313811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.056 [2024-12-06 17:16:37.345864] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:55.056 [2024-12-06 17:16:37.346288] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84327 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84327 /var/tmp/bdevperf.sock 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84327 ']' 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
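waitforlisten, used for both apps here, avoids a fixed sleep by polling the app's RPC socket until it answers. A hedged approximation of that idea (rpc_get_methods is a standard SPDK RPC; the 100 x 0.1 s budget is an assumption, not the helper's real timeout):

    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods &>/dev/null; then
            echo "bdevperf is listening on /var/tmp/bdevperf.sock"
            break
        fi
        sleep 0.1
    done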
00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:16:55.999 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:16:55.999 "subsystems": [ 00:16:55.999 { 00:16:55.999 "subsystem": "keyring", 00:16:55.999 "config": [ 00:16:55.999 { 00:16:55.999 "method": "keyring_file_add_key", 00:16:55.999 "params": { 00:16:55.999 "name": "key0", 00:16:55.999 "path": "/tmp/tmp.88WQ6ZvsQP" 00:16:55.999 } 00:16:55.999 } 00:16:55.999 ] 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "subsystem": "iobuf", 00:16:55.999 "config": [ 00:16:55.999 { 00:16:55.999 "method": "iobuf_set_options", 00:16:55.999 "params": { 00:16:55.999 "enable_numa": false, 00:16:55.999 "large_bufsize": 135168, 00:16:55.999 "large_pool_count": 1024, 00:16:55.999 "small_bufsize": 8192, 00:16:55.999 "small_pool_count": 8192 00:16:55.999 } 00:16:55.999 } 00:16:55.999 ] 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "subsystem": "sock", 00:16:55.999 "config": [ 00:16:55.999 { 00:16:55.999 "method": "sock_set_default_impl", 00:16:55.999 "params": { 00:16:55.999 "impl_name": "posix" 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "sock_impl_set_options", 00:16:55.999 "params": { 00:16:55.999 "enable_ktls": false, 00:16:55.999 "enable_placement_id": 0, 00:16:55.999 "enable_quickack": false, 00:16:55.999 "enable_recv_pipe": true, 00:16:55.999 "enable_zerocopy_send_client": false, 00:16:55.999 "enable_zerocopy_send_server": true, 00:16:55.999 "impl_name": "ssl", 00:16:55.999 "recv_buf_size": 4096, 00:16:55.999 "send_buf_size": 4096, 00:16:55.999 "tls_version": 0, 00:16:55.999 "zerocopy_threshold": 0 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "sock_impl_set_options", 00:16:55.999 "params": { 00:16:55.999 "enable_ktls": false, 00:16:55.999 "enable_placement_id": 0, 00:16:55.999 "enable_quickack": false, 00:16:55.999 "enable_recv_pipe": true, 00:16:55.999 "enable_zerocopy_send_client": false, 00:16:55.999 "enable_zerocopy_send_server": true, 00:16:55.999 "impl_name": "posix", 00:16:55.999 "recv_buf_size": 2097152, 00:16:55.999 "send_buf_size": 2097152, 00:16:55.999 "tls_version": 0, 00:16:55.999 "zerocopy_threshold": 0 00:16:55.999 } 00:16:55.999 } 00:16:55.999 ] 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "subsystem": "vmd", 00:16:55.999 "config": [] 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "subsystem": "accel", 00:16:55.999 "config": [ 00:16:55.999 { 00:16:55.999 "method": "accel_set_options", 00:16:55.999 "params": { 00:16:55.999 "buf_count": 2048, 00:16:55.999 "large_cache_size": 16, 00:16:55.999 "sequence_count": 2048, 00:16:55.999 "small_cache_size": 128, 00:16:55.999 "task_count": 2048 00:16:55.999 } 00:16:55.999 } 00:16:55.999 ] 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "subsystem": "bdev", 00:16:55.999 "config": [ 00:16:55.999 { 00:16:55.999 "method": "bdev_set_options", 00:16:55.999 "params": { 00:16:55.999 "bdev_auto_examine": true, 00:16:55.999 "bdev_io_cache_size": 256, 00:16:55.999 "bdev_io_pool_size": 65535, 00:16:55.999 "iobuf_large_cache_size": 16, 00:16:55.999 "iobuf_small_cache_size": 128 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "bdev_raid_set_options", 
00:16:55.999 "params": { 00:16:55.999 "process_max_bandwidth_mb_sec": 0, 00:16:55.999 "process_window_size_kb": 1024 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "bdev_iscsi_set_options", 00:16:55.999 "params": { 00:16:55.999 "timeout_sec": 30 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "bdev_nvme_set_options", 00:16:55.999 "params": { 00:16:55.999 "action_on_timeout": "none", 00:16:55.999 "allow_accel_sequence": false, 00:16:55.999 "arbitration_burst": 0, 00:16:55.999 "bdev_retry_count": 3, 00:16:55.999 "ctrlr_loss_timeout_sec": 0, 00:16:55.999 "delay_cmd_submit": true, 00:16:55.999 "dhchap_dhgroups": [ 00:16:55.999 "null", 00:16:55.999 "ffdhe2048", 00:16:55.999 "ffdhe3072", 00:16:55.999 "ffdhe4096", 00:16:55.999 "ffdhe6144", 00:16:55.999 "ffdhe8192" 00:16:55.999 ], 00:16:55.999 "dhchap_digests": [ 00:16:55.999 "sha256", 00:16:55.999 "sha384", 00:16:55.999 "sha512" 00:16:55.999 ], 00:16:55.999 "disable_auto_failback": false, 00:16:55.999 "fast_io_fail_timeout_sec": 0, 00:16:55.999 "generate_uuids": false, 00:16:55.999 "high_priority_weight": 0, 00:16:55.999 "io_path_stat": false, 00:16:55.999 "io_queue_requests": 512, 00:16:55.999 "keep_alive_timeout_ms": 10000, 00:16:55.999 "low_priority_weight": 0, 00:16:55.999 "medium_priority_weight": 0, 00:16:55.999 "nvme_adminq_poll_period_us": 10000, 00:16:55.999 "nvme_error_stat": false, 00:16:55.999 "nvme_ioq_poll_period_us": 0, 00:16:55.999 "rdma_cm_event_timeout_ms": 0, 00:16:55.999 "rdma_max_cq_size": 0, 00:16:55.999 "rdma_srq_size": 0, 00:16:55.999 "reconnect_delay_sec": 0, 00:16:55.999 "timeout_admin_us": 0, 00:16:55.999 "timeout_us": 0, 00:16:55.999 "transport_ack_timeout": 0, 00:16:55.999 "transport_retry_count": 4, 00:16:55.999 "transport_tos": 0 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "bdev_nvme_attach_controller", 00:16:55.999 "params": { 00:16:55.999 "adrfam": "IPv4", 00:16:55.999 "ctrlr_loss_timeout_sec": 0, 00:16:55.999 "ddgst": false, 00:16:55.999 "fast_io_fail_timeout_sec": 0, 00:16:55.999 "hdgst": false, 00:16:55.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:55.999 "multipath": "multipath", 00:16:55.999 "name": "nvme0", 00:16:55.999 "prchk_guard": false, 00:16:55.999 "prchk_reftag": false, 00:16:55.999 "psk": "key0", 00:16:55.999 "reconnect_delay_sec": 0, 00:16:55.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:55.999 "traddr": "10.0.0.3", 00:16:55.999 "trsvcid": "4420", 00:16:55.999 "trtype": "TCP" 00:16:55.999 } 00:16:55.999 }, 00:16:55.999 { 00:16:55.999 "method": "bdev_nvme_set_hotplug", 00:16:55.999 "params": { 00:16:56.000 "enable": false, 00:16:56.000 "period_us": 100000 00:16:56.000 } 00:16:56.000 }, 00:16:56.000 { 00:16:56.000 "method": "bdev_enable_histogram", 00:16:56.000 "params": { 00:16:56.000 "enable": true, 00:16:56.000 "name": "nvme0n1" 00:16:56.000 } 00:16:56.000 }, 00:16:56.000 { 00:16:56.000 "method": "bdev_wait_for_examine" 00:16:56.000 } 00:16:56.000 ] 00:16:56.000 }, 00:16:56.000 { 00:16:56.000 "subsystem": "nbd", 00:16:56.000 "config": [] 00:16:56.000 } 00:16:56.000 ] 00:16:56.000 }' 00:16:56.000 [2024-12-06 17:16:38.105392] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:16:56.000 [2024-12-06 17:16:38.105487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84327 ] 00:16:56.000 [2024-12-06 17:16:38.250833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.000 [2024-12-06 17:16:38.285610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.258 [2024-12-06 17:16:38.424233] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:56.516 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.516 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:56.516 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:56.516 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:16:56.775 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.775 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.775 Running I/O for 1 seconds... 00:16:57.713 3594.00 IOPS, 14.04 MiB/s 00:16:57.713 Latency(us) 00:16:57.713 [2024-12-06T17:16:40.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.713 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:57.713 Verification LBA range: start 0x0 length 0x2000 00:16:57.713 nvme0n1 : 1.02 3661.65 14.30 0.00 0.00 34636.27 5868.45 38368.35 00:16:57.713 [2024-12-06T17:16:40.011Z] =================================================================================================================== 00:16:57.713 [2024-12-06T17:16:40.011Z] Total : 3661.65 14.30 0.00 0.00 34636.27 5868.45 38368.35 00:16:57.713 { 00:16:57.713 "results": [ 00:16:57.713 { 00:16:57.713 "job": "nvme0n1", 00:16:57.713 "core_mask": "0x2", 00:16:57.713 "workload": "verify", 00:16:57.713 "status": "finished", 00:16:57.713 "verify_range": { 00:16:57.713 "start": 0, 00:16:57.713 "length": 8192 00:16:57.713 }, 00:16:57.713 "queue_depth": 128, 00:16:57.713 "io_size": 4096, 00:16:57.713 "runtime": 1.016756, 00:16:57.713 "iops": 3661.6454685293224, 00:16:57.713 "mibps": 14.303302611442666, 00:16:57.713 "io_failed": 0, 00:16:57.713 "io_timeout": 0, 00:16:57.713 "avg_latency_us": 34636.267283959656, 00:16:57.713 "min_latency_us": 5868.450909090909, 00:16:57.713 "max_latency_us": 38368.34909090909 00:16:57.713 } 00:16:57.713 ], 00:16:57.713 "core_count": 1 00:16:57.713 } 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:16:57.713 
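The throughput bdevperf reported above is internally consistent: MiB/s is IOPS times the 4096-byte I/O size divided by 2^20, so 3661.65 IOPS yields the 14.30 MiB/s in the table. A one-line cross-check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 3661.65 * 4096 / 1048576 }'   # prints 14.30 MiB/s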
17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:16:57.713 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:57.713 nvmf_trace.0 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84327 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84327 ']' 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84327 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84327 00:16:57.972 killing process with pid 84327 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84327' 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84327 00:16:57.972 Received shutdown signal, test time was about 1.000000 seconds 00:16:57.972 00:16:57.972 Latency(us) 00:16:57.972 [2024-12-06T17:16:40.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:57.972 [2024-12-06T17:16:40.270Z] =================================================================================================================== 00:16:57.972 [2024-12-06T17:16:40.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84327 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.972 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:58.232 rmmod nvme_tcp 00:16:58.232 rmmod nvme_fabrics 00:16:58.232 rmmod nvme_keyring 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 84283 ']' 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 84283 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84283 ']' 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84283 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84283 00:16:58.232 killing process with pid 84283 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84283' 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84283 00:16:58.232 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84283 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:58.491 17:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.491 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.1gS3bvzoQX /tmp/tmp.SlYDaBArZ4 /tmp/tmp.88WQ6ZvsQP 00:16:58.750 ************************************ 00:16:58.750 END TEST nvmf_tls 00:16:58.750 ************************************ 00:16:58.750 00:16:58.750 real 1m22.999s 00:16:58.750 user 2m17.059s 00:16:58.750 sys 0m26.883s 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:58.750 ************************************ 00:16:58.750 START TEST nvmf_fips 00:16:58.750 ************************************ 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:16:58.750 * Looking for test storage... 
00:16:58.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:16:58.750 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.010 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.011 --rc genhtml_branch_coverage=1 00:16:59.011 --rc genhtml_function_coverage=1 00:16:59.011 --rc genhtml_legend=1 00:16:59.011 --rc geninfo_all_blocks=1 00:16:59.011 --rc geninfo_unexecuted_blocks=1 00:16:59.011 00:16:59.011 ' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.011 --rc genhtml_branch_coverage=1 00:16:59.011 --rc genhtml_function_coverage=1 00:16:59.011 --rc genhtml_legend=1 00:16:59.011 --rc geninfo_all_blocks=1 00:16:59.011 --rc geninfo_unexecuted_blocks=1 00:16:59.011 00:16:59.011 ' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.011 --rc genhtml_branch_coverage=1 00:16:59.011 --rc genhtml_function_coverage=1 00:16:59.011 --rc genhtml_legend=1 00:16:59.011 --rc geninfo_all_blocks=1 00:16:59.011 --rc geninfo_unexecuted_blocks=1 00:16:59.011 00:16:59.011 ' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:59.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.011 --rc genhtml_branch_coverage=1 00:16:59.011 --rc genhtml_function_coverage=1 00:16:59.011 --rc genhtml_legend=1 00:16:59.011 --rc geninfo_all_blocks=1 00:16:59.011 --rc geninfo_unexecuted_blocks=1 00:16:59.011 00:16:59.011 ' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
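The long (( ver1[v] ... ver2[v] )) walk above is scripts/common.sh comparing versions component by component: here lt 1.15 2 for lcov, and further below ge 3.1.1 3.0.0 for OpenSSL. A hedged stand-in for both helpers using sort -V rather than the repo's piecewise loop:

    lt() { [[ "$1" != "$2" && "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; }
    ge() { [[ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" == "$2" ]]; }
    lt 1.15 2      && echo "lcov 1.15 < 2"
    ge 3.1.1 3.0.0 && echo "OpenSSL 3.1.1 >= 3.0.0"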
00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:59.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.011 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:16:59.012 Error setting digest 00:16:59.012 40924EE4467F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:16:59.012 40924EE4467F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:59.012 
17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:59.012 Cannot find device "nvmf_init_br" 00:16:59.012 17:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:59.012 Cannot find device "nvmf_init_br2" 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:16:59.012 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:59.272 Cannot find device "nvmf_tgt_br" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:59.272 Cannot find device "nvmf_tgt_br2" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:59.272 Cannot find device "nvmf_init_br" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:59.272 Cannot find device "nvmf_init_br2" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:59.272 Cannot find device "nvmf_tgt_br" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:59.272 Cannot find device "nvmf_tgt_br2" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:59.272 Cannot find device "nvmf_br" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:59.272 Cannot find device "nvmf_init_if" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:59.272 Cannot find device "nvmf_init_if2" 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:59.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:59.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:59.272 17:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:59.272 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:59.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:59.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:16:59.531 00:16:59.531 --- 10.0.0.3 ping statistics --- 00:16:59.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.531 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:59.531 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:59.531 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:16:59.531 00:16:59.531 --- 10.0.0.4 ping statistics --- 00:16:59.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.531 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:59.531 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:59.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:59.532 00:16:59.532 --- 10.0.0.1 ping statistics --- 00:16:59.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.532 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:59.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:59.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:16:59.532 00:16:59.532 --- 10.0.0.2 ping statistics --- 00:16:59.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.532 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=84650 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 84650 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84650 ']' 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.532 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:16:59.532 [2024-12-06 17:16:41.793965] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
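The nvmf_veth_init sequence traced above is the harness's standard NVMe-oF TCP topology: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends, the host keeps the initiator ends, the host-side peer interfaces are enslaved to a bridge (nvmf_br), TCP port 4420 is opened in iptables, and reachability is verified with single pings before the target starts. A minimal standalone sketch of the same wiring, reduced to one initiator/target pair (commands mirror the trace; assumes root and iproute2, and is illustrative rather than the test harness itself):

    #!/usr/bin/env bash
    set -e
    # Namespace for the target; veth pairs for initiator and target sides.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Address the initiator (host) and target (namespace) ends.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring everything up, including loopback inside the namespace.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so the two ends can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Admit NVMe/TCP traffic on port 4420 and forward across the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.3   # host -> target namespace, as in the trace

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the cleanup helpers run before setup and simply report that there is nothing to remove on a fresh node.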
00:16:59.532 [2024-12-06 17:16:41.794069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.790 [2024-12-06 17:16:41.940493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.790 [2024-12-06 17:16:41.971706] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.790 [2024-12-06 17:16:41.971763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.790 [2024-12-06 17:16:41.971774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.790 [2024-12-06 17:16:41.971782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.790 [2024-12-06 17:16:41.971789] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:59.790 [2024-12-06 17:16:41.972115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.790 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.790 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:16:59.790 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.790 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.790 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.CwO 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.CwO 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.CwO 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.CwO 00:17:00.048 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.306 [2024-12-06 17:16:42.357752] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.306 [2024-12-06 17:16:42.373702] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:00.306 [2024-12-06 17:16:42.373902] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:00.306 malloc0 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.306 17:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84691 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84691 /var/tmp/bdevperf.sock 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 84691 ']' 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.306 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:00.306 [2024-12-06 17:16:42.504196] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:00.306 [2024-12-06 17:16:42.504299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84691 ] 00:17:00.565 [2024-12-06 17:16:42.647580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.565 [2024-12-06 17:16:42.693525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.565 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.565 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:00.565 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.CwO 00:17:00.823 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:01.082 [2024-12-06 17:16:43.280731] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.082 TLSTESTn1 00:17:01.082 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:01.341 Running I/O for 10 seconds... 
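At this point the FIPS test has written the TLS pre-shared key to a mode-0600 temp file, registered it with the bdevperf application over its RPC socket, and attached an NVMe/TCP controller with TLS enabled. The essential initiator-side sequence, condensed from the trace (paths, NQNs, and the PSK are the ones shown above; the real harness generates the key file path with mktemp):

    # PSK in the NVMe-oF TLS interchange format, readable only by the owner.
    key_path=/tmp/spdk-psk.CwO
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"

    # Register the key file with bdevperf, then attach the controller over TLS.
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # Kick off the queued verify workload (-q 128 -o 4096 -w verify -t 10).
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With the FIPS provider active, the earlier "openssl md5" failure is the point of the test: MD5 is not an approved digest, so the run only proceeds once that command has been confirmed to fail.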
00:17:03.211 3978.00 IOPS, 15.54 MiB/s [2024-12-06T17:16:46.883Z] 4012.00 IOPS, 15.67 MiB/s [2024-12-06T17:16:47.819Z] 4020.33 IOPS, 15.70 MiB/s [2024-12-06T17:16:48.756Z] 4031.50 IOPS, 15.75 MiB/s [2024-12-06T17:16:49.692Z] 4033.20 IOPS, 15.75 MiB/s [2024-12-06T17:16:50.628Z] 4037.83 IOPS, 15.77 MiB/s [2024-12-06T17:16:51.565Z] 4038.29 IOPS, 15.77 MiB/s [2024-12-06T17:16:52.504Z] 4042.88 IOPS, 15.79 MiB/s [2024-12-06T17:16:53.885Z] 4039.22 IOPS, 15.78 MiB/s [2024-12-06T17:16:53.885Z] 4038.50 IOPS, 15.78 MiB/s 00:17:11.587 Latency(us) 00:17:11.587 [2024-12-06T17:16:53.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.587 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:11.587 Verification LBA range: start 0x0 length 0x2000 00:17:11.587 TLSTESTn1 : 10.02 4044.59 15.80 0.00 0.00 31588.34 5868.45 25976.09 00:17:11.587 [2024-12-06T17:16:53.885Z] =================================================================================================================== 00:17:11.587 [2024-12-06T17:16:53.885Z] Total : 4044.59 15.80 0.00 0.00 31588.34 5868.45 25976.09 00:17:11.587 { 00:17:11.587 "results": [ 00:17:11.587 { 00:17:11.587 "job": "TLSTESTn1", 00:17:11.587 "core_mask": "0x4", 00:17:11.587 "workload": "verify", 00:17:11.587 "status": "finished", 00:17:11.587 "verify_range": { 00:17:11.587 "start": 0, 00:17:11.587 "length": 8192 00:17:11.587 }, 00:17:11.587 "queue_depth": 128, 00:17:11.587 "io_size": 4096, 00:17:11.587 "runtime": 10.016097, 00:17:11.587 "iops": 4044.5894244035376, 00:17:11.587 "mibps": 15.799177439076319, 00:17:11.587 "io_failed": 0, 00:17:11.587 "io_timeout": 0, 00:17:11.587 "avg_latency_us": 31588.342881865985, 00:17:11.587 "min_latency_us": 5868.450909090909, 00:17:11.587 "max_latency_us": 25976.087272727273 00:17:11.587 } 00:17:11.587 ], 00:17:11.587 "core_count": 1 00:17:11.587 } 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:11.587 nvmf_trace.0 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84691 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84691 ']' 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill 
-0 84691 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84691 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:11.587 killing process with pid 84691 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84691' 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84691 00:17:11.587 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.587 00:17:11.587 Latency(us) 00:17:11.587 [2024-12-06T17:16:53.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.587 [2024-12-06T17:16:53.885Z] =================================================================================================================== 00:17:11.587 [2024-12-06T17:16:53.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84691 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:11.587 rmmod nvme_tcp 00:17:11.587 rmmod nvme_fabrics 00:17:11.587 rmmod nvme_keyring 00:17:11.587 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 84650 ']' 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 84650 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 84650 ']' 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 84650 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84650 00:17:11.854 killing process with pid 84650 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:11.854 17:16:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84650' 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 84650 00:17:11.854 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 84650 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:11.854 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 
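Teardown mirrors setup: the test-specific iptables rules are removed by filtering on their SPDK_NVMF comment, the veth ends are detached from the bridge and brought down, and the links and namespace are deleted. A condensed sketch of the same steps, again reduced to one initiator/target pair (the final namespace removal happens inside _remove_spdk_ns, whose body is not shown in this trace, so that line is an assumption):

    # Drop only the rules this test added; everything else is restored untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Detach from the bridge, bring links down, then delete them.
    ip link set nvmf_init_br nomaster
    ip link set nvmf_tgt_br nomaster
    ip link set nvmf_init_br down
    ip link set nvmf_tgt_br down
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns delete nvmf_tgt_ns_spdk   # assumption: _remove_spdk_ns deletes the namespace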
00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.CwO 00:17:12.113 00:17:12.113 real 0m13.433s 00:17:12.113 user 0m18.494s 00:17:12.113 sys 0m5.385s 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:12.113 ************************************ 00:17:12.113 END TEST nvmf_fips 00:17:12.113 ************************************ 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.113 ************************************ 00:17:12.113 START TEST nvmf_control_msg_list 00:17:12.113 ************************************ 00:17:12.113 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:12.372 * Looking for test storage... 00:17:12.372 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.372 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:12.373 17:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:12.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.373 --rc genhtml_branch_coverage=1 00:17:12.373 --rc genhtml_function_coverage=1 00:17:12.373 --rc genhtml_legend=1 00:17:12.373 --rc geninfo_all_blocks=1 00:17:12.373 --rc geninfo_unexecuted_blocks=1 00:17:12.373 00:17:12.373 ' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:12.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.373 --rc genhtml_branch_coverage=1 00:17:12.373 --rc genhtml_function_coverage=1 00:17:12.373 --rc genhtml_legend=1 00:17:12.373 --rc geninfo_all_blocks=1 00:17:12.373 --rc geninfo_unexecuted_blocks=1 00:17:12.373 00:17:12.373 ' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:12.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.373 --rc genhtml_branch_coverage=1 00:17:12.373 --rc genhtml_function_coverage=1 00:17:12.373 --rc genhtml_legend=1 00:17:12.373 --rc geninfo_all_blocks=1 00:17:12.373 --rc geninfo_unexecuted_blocks=1 00:17:12.373 00:17:12.373 ' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:12.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.373 --rc genhtml_branch_coverage=1 00:17:12.373 --rc genhtml_function_coverage=1 00:17:12.373 --rc genhtml_legend=1 00:17:12.373 --rc 
geninfo_all_blocks=1 00:17:12.373 --rc geninfo_unexecuted_blocks=1 00:17:12.373 00:17:12.373 ' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.373 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:12.374 Cannot find device "nvmf_init_br" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:12.374 Cannot find device "nvmf_init_br2" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:12.374 Cannot find device "nvmf_tgt_br" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.374 Cannot find device "nvmf_tgt_br2" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:12.374 Cannot find device "nvmf_init_br" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:12.374 Cannot find device "nvmf_init_br2" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:12.374 Cannot find device "nvmf_tgt_br" 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:17:12.374 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:12.632 Cannot find device "nvmf_tgt_br2" 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:12.632 Cannot find device "nvmf_br" 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:12.632 Cannot find 
device "nvmf_init_if" 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:12.632 Cannot find device "nvmf_init_if2" 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:12.632 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:12.633 17:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:12.633 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:12.633 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.116 ms 00:17:12.633 00:17:12.633 --- 10.0.0.3 ping statistics --- 00:17:12.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.633 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:12.633 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:12.633 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:17:12.633 00:17:12.633 --- 10.0.0.4 ping statistics --- 00:17:12.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.633 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:12.633 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:12.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:12.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:17:12.893 00:17:12.893 --- 10.0.0.1 ping statistics --- 00:17:12.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.893 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:12.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:17:12.893 00:17:12.893 --- 10.0.0.2 ping statistics --- 00:17:12.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.893 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85088 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85088 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85088 ']' 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
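The waitforlisten trace running through this point polls until the just-launched nvmf_tgt answers RPCs on /var/tmp/spdk.sock, bounded by the max_retries=100 visible above. A minimal sketch of such a readiness loop, assuming SPDK's scripts/rpc.py and its rpc_get_methods RPC (the exact body of waitforlisten in autotest_common.sh may differ):

    # Poll until the target process serves RPCs on its UNIX socket, or give up.
    # pid and socket path match the values visible in the trace.
    waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # target process died
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
          return 0                                # socket is up and answering
        fi
        sleep 0.5
      done
      return 1
    }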
00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.893 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:12.893 [2024-12-06 17:16:55.012127] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:12.893 [2024-12-06 17:16:55.012221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.893 [2024-12-06 17:16:55.158640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.153 [2024-12-06 17:16:55.189641] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.153 [2024-12-06 17:16:55.189693] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.153 [2024-12-06 17:16:55.189704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.153 [2024-12-06 17:16:55.189712] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.153 [2024-12-06 17:16:55.189719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.153 [2024-12-06 17:16:55.190017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 [2024-12-06 17:16:55.319023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 Malloc0 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:13.153 [2024-12-06 17:16:55.353349] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85119 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85120 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85121 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:13.153 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85119 00:17:13.411 [2024-12-06 17:16:55.531601] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery 
subsystem. This behavior is deprecated and will be removed in a future release. 00:17:13.411 [2024-12-06 17:16:55.542210] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:13.411 [2024-12-06 17:16:55.542476] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:14.345 Initializing NVMe Controllers 00:17:14.345 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:14.345 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:14.345 Initialization complete. Launching workers. 00:17:14.345 ======================================================== 00:17:14.345 Latency(us) 00:17:14.345 Device Information : IOPS MiB/s Average min max 00:17:14.345 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3280.00 12.81 304.51 129.81 680.17 00:17:14.345 ======================================================== 00:17:14.345 Total : 3280.00 12.81 304.51 129.81 680.17 00:17:14.345 00:17:14.345 Initializing NVMe Controllers 00:17:14.345 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:14.345 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:14.345 Initialization complete. Launching workers. 00:17:14.345 ======================================================== 00:17:14.345 Latency(us) 00:17:14.345 Device Information : IOPS MiB/s Average min max 00:17:14.345 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3253.00 12.71 307.06 208.98 835.91 00:17:14.345 ======================================================== 00:17:14.345 Total : 3253.00 12.71 307.06 208.98 835.91 00:17:14.345 00:17:14.345 Initializing NVMe Controllers 00:17:14.345 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:14.345 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:14.345 Initialization complete. Launching workers. 
00:17:14.345 ======================================================== 00:17:14.345 Latency(us) 00:17:14.346 Device Information : IOPS MiB/s Average min max 00:17:14.346 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3261.00 12.74 306.28 209.18 623.76 00:17:14.346 ======================================================== 00:17:14.346 Total : 3261.00 12.74 306.28 209.18 623.76 00:17:14.346 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85120 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85121 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:14.346 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:14.346 rmmod nvme_tcp 00:17:14.604 rmmod nvme_fabrics 00:17:14.604 rmmod nvme_keyring 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85088 ']' 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85088 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85088 ']' 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85088 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85088 00:17:14.604 killing process with pid 85088 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85088' 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85088 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85088 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:14.604 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:14.865 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:17:14.865 00:17:14.865 real 0m2.718s 00:17:14.865 user 0m4.606s 00:17:14.865 
sys 0m1.239s 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:14.865 ************************************ 00:17:14.865 END TEST nvmf_control_msg_list 00:17:14.865 ************************************ 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.865 ************************************ 00:17:14.865 START TEST nvmf_wait_for_buf 00:17:14.865 ************************************ 00:17:14.865 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:15.126 * Looking for test storage... 00:17:15.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:15.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.127 --rc genhtml_branch_coverage=1 00:17:15.127 --rc genhtml_function_coverage=1 00:17:15.127 --rc genhtml_legend=1 00:17:15.127 --rc geninfo_all_blocks=1 00:17:15.127 --rc geninfo_unexecuted_blocks=1 00:17:15.127 00:17:15.127 ' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:15.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.127 --rc genhtml_branch_coverage=1 00:17:15.127 --rc genhtml_function_coverage=1 00:17:15.127 --rc genhtml_legend=1 00:17:15.127 --rc geninfo_all_blocks=1 00:17:15.127 --rc geninfo_unexecuted_blocks=1 00:17:15.127 00:17:15.127 ' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:15.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.127 --rc genhtml_branch_coverage=1 00:17:15.127 --rc genhtml_function_coverage=1 00:17:15.127 --rc genhtml_legend=1 00:17:15.127 --rc geninfo_all_blocks=1 00:17:15.127 --rc geninfo_unexecuted_blocks=1 00:17:15.127 00:17:15.127 ' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:15.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:15.127 --rc genhtml_branch_coverage=1 00:17:15.127 --rc genhtml_function_coverage=1 00:17:15.127 --rc genhtml_legend=1 00:17:15.127 --rc geninfo_all_blocks=1 00:17:15.127 --rc geninfo_unexecuted_blocks=1 00:17:15.127 00:17:15.127 ' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:15.127 17:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.127 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:15.128 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
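The "[: : integer expression expected" message from nvmf/common.sh line 33, seen here and again at the top of the control_msg_list test above, is a bash artifact rather than a test failure: the trace shows '[' '' -eq 1 ']', i.e. -eq applied to an empty operand because the variable under test is unset in this environment. A sketch of the usual hardening, where FLAG and enable_feature are placeholders (the real names are not visible in the trace):

    # Failing pattern: with FLAG unset this expands to '[' '' -eq 1 ']'
    # and test(1) rejects the empty string as a non-integer.
    [ "$FLAG" -eq 1 ] && enable_feature

    # Robust forms: default the expansion, or use an arithmetic context.
    [ "${FLAG:-0}" -eq 1 ] && enable_feature
    (( ${FLAG:-0} == 1 )) && enable_feature

Since the failing test simply returns a nonzero status, the script continues and the branch is treated as false, which is why the run proceeds normally after each occurrence.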
00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:15.128 Cannot find device "nvmf_init_br" 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:15.128 Cannot find device "nvmf_init_br2" 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:15.128 Cannot find device "nvmf_tgt_br" 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:15.128 Cannot find device "nvmf_tgt_br2" 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:15.128 Cannot find device "nvmf_init_br" 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:15.128 Cannot find device "nvmf_init_br2" 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:17:15.128 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:15.387 Cannot find device "nvmf_tgt_br" 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:15.387 Cannot find device "nvmf_tgt_br2" 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:15.387 Cannot find device "nvmf_br" 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:15.387 Cannot find device "nvmf_init_if" 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:15.387 Cannot find device "nvmf_init_if2" 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:15.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:15.387 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:15.387 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:15.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:15.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:17:15.646 00:17:15.646 --- 10.0.0.3 ping statistics --- 00:17:15.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.646 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:15.646 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:15.646 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:17:15.646 00:17:15.646 --- 10.0.0.4 ping statistics --- 00:17:15.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.646 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:15.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:15.646 00:17:15.646 --- 10.0.0.1 ping statistics --- 00:17:15.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.646 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:15.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:15.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:17:15.646 00:17:15.646 --- 10.0.0.2 ping statistics --- 00:17:15.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.646 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=85363 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 85363 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 85363 ']' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:15.646 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:15.646 [2024-12-06 17:16:57.855386] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
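Unlike the control_msg_list target above, this one is launched with --wait-for-rpc, so the app pauses before framework initialization; that is what lets the test shrink the iobuf pools before any transport allocates from them. The rpc_cmd calls traced below amount to the following sequence, sketched here against the default RPC socket with SPDK's scripts/rpc.py, flags copied from the trace:

    # Target was started as: nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
    # Pre-init tuning while the framework is still paused:
    scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    # Only now let subsystem initialization proceed:
    scripts/rpc.py framework_start_init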
00:17:15.646 [2024-12-06 17:16:57.856210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.905 [2024-12-06 17:16:58.012098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.905 [2024-12-06 17:16:58.049411] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.905 [2024-12-06 17:16:58.049629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.905 [2024-12-06 17:16:58.049789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.905 [2024-12-06 17:16:58.049938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.905 [2024-12-06 17:16:58.049981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.905 [2024-12-06 17:16:58.050560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.841 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:16.842 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.842 17:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.842 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:16.842 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.842 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 Malloc0 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 [2024-12-06 17:16:59.012743] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:16.842 [2024-12-06 17:16:59.036858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.842 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:17.101 [2024-12-06 17:16:59.239434] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:17:18.475 Initializing NVMe Controllers 00:17:18.475 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:17:18.475 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:18.475 Initialization complete. Launching workers. 00:17:18.475 ======================================================== 00:17:18.475 Latency(us) 00:17:18.475 Device Information : IOPS MiB/s Average min max 00:17:18.475 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32308.04 8035.59 65695.19 00:17:18.475 ======================================================== 00:17:18.475 Total : 129.00 16.12 32308.04 8035.59 65695.19 00:17:18.475 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:18.475 rmmod nvme_tcp 00:17:18.475 rmmod nvme_fabrics 00:17:18.475 rmmod nvme_keyring 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 85363 ']' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 85363 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 85363 ']' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 85363 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
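For reference, the perf run above deliberately starves the small iobuf pool (--small-pool-count 154 against 128 KiB random reads at queue depth 4) and the test then asserts that iobuf_get_stats reports a non-zero small-pool retry count (2038 in this run). A minimal sketch of that check, using the RPC flags and jq filter exactly as they appear in the trace (the rpc.py path is this run's repo layout, not a fixed location):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # stock SPDK RPC client (path assumed)

    # Shrink the small iobuf pool before framework init so 128 KiB reads must queue.
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init

    # ... create the transport/subsystem and run spdk_nvme_perf as above, then:
    retry_count=$($rpc iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')

    # Success means the pool really ran dry: retries must be non-zero (2038 here).
    [[ $retry_count -eq 0 ]] && { echo "FAIL: no buffer retries observed"; exit 1; }
    echo "OK: $retry_count small-buffer retries"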
00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85363 00:17:18.475 killing process with pid 85363 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85363' 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 85363 00:17:18.475 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 85363 00:17:18.733 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:18.733 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:18.733 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:18.733 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:18.733 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:18.733 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:18.734 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:18.734 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:18.734 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:18.992 17:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:17:18.992 00:17:18.992 real 0m4.044s 00:17:18.992 user 0m3.634s 00:17:18.992 sys 0m0.731s 00:17:18.992 ************************************ 00:17:18.992 END TEST nvmf_wait_for_buf 00:17:18.992 ************************************ 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.992 ************************************ 00:17:18.992 START TEST nvmf_nsid 00:17:18.992 ************************************ 00:17:18.992 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:19.251 * Looking for test storage... 
00:17:19.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.251 --rc genhtml_branch_coverage=1 00:17:19.251 --rc genhtml_function_coverage=1 00:17:19.251 --rc genhtml_legend=1 00:17:19.251 --rc geninfo_all_blocks=1 00:17:19.251 --rc geninfo_unexecuted_blocks=1 00:17:19.251 00:17:19.251 ' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.251 --rc genhtml_branch_coverage=1 00:17:19.251 --rc genhtml_function_coverage=1 00:17:19.251 --rc genhtml_legend=1 00:17:19.251 --rc geninfo_all_blocks=1 00:17:19.251 --rc geninfo_unexecuted_blocks=1 00:17:19.251 00:17:19.251 ' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.251 --rc genhtml_branch_coverage=1 00:17:19.251 --rc genhtml_function_coverage=1 00:17:19.251 --rc genhtml_legend=1 00:17:19.251 --rc geninfo_all_blocks=1 00:17:19.251 --rc geninfo_unexecuted_blocks=1 00:17:19.251 00:17:19.251 ' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:19.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.251 --rc genhtml_branch_coverage=1 00:17:19.251 --rc genhtml_function_coverage=1 00:17:19.251 --rc genhtml_legend=1 00:17:19.251 --rc geninfo_all_blocks=1 00:17:19.251 --rc geninfo_unexecuted_blocks=1 00:17:19.251 00:17:19.251 ' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
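The lt 1.15 2 call traced above is scripts/common.sh comparing the installed lcov version field by field, splitting on the same IFS=.-: separators visible in the trace. A self-contained sketch of the same idea, assuming purely numeric version fields (the real implementation in scripts/common.sh handles more cases):

    # Compare two dotted versions numerically, field by field.
    # Returns 0 when $1 < $2, mirroring the lt/cmp_versions calls in the trace.
    version_lt() {
        local IFS=.-:            # split on the same separators the trace uses
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                 # equal is not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov check above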
00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:19.251 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:19.252 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:19.252 Cannot find device "nvmf_init_br" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:19.252 Cannot find device "nvmf_init_br2" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:19.252 Cannot find device "nvmf_tgt_br" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:19.252 Cannot find device "nvmf_tgt_br2" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:19.252 Cannot find device "nvmf_init_br" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:19.252 Cannot find device "nvmf_init_br2" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:19.252 Cannot find device "nvmf_tgt_br" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:19.252 Cannot find device "nvmf_tgt_br2" 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:19.252 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:19.510 Cannot find device "nvmf_br" 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:19.510 Cannot find device "nvmf_init_if" 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:19.510 Cannot find device "nvmf_init_if2" 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:19.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:19.510 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:19.510 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:19.511 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
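The "Cannot find device" and "Cannot open network namespace" lines above are just the cleanup-before-setup pass; nvmf_veth_init then rebuilds the virtual topology from scratch. Condensed, the commands in the trace amount to the following (interface names and addresses exactly as logged; ordering lightly regrouped for readability):

    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target ends live inside the namespace; the target binds 10.0.0.3/.4 there.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # A bridge stitches the four host-side peers into one L2 segment,
    # which is what makes the four ping checks that follow succeed.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done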
00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:19.769 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:19.769 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:17:19.769 00:17:19.769 --- 10.0.0.3 ping statistics --- 00:17:19.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.769 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:19.769 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:19.769 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:17:19.769 00:17:19.769 --- 10.0.0.4 ping statistics --- 00:17:19.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.769 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:19.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:17:19.769 00:17:19.769 --- 10.0.0.1 ping statistics --- 00:17:19.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.769 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:19.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:19.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:17:19.769 00:17:19.769 --- 10.0.0.2 ping statistics --- 00:17:19.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.769 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=85653 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 85653 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85653 ']' 00:17:19.769 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.770 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.770 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.770 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.770 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:19.770 [2024-12-06 17:17:01.972846] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:17:19.770 [2024-12-06 17:17:01.972948] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.032 [2024-12-06 17:17:02.121094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.032 [2024-12-06 17:17:02.160189] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.032 [2024-12-06 17:17:02.160257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.032 [2024-12-06 17:17:02.160288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.032 [2024-12-06 17:17:02.160299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.032 [2024-12-06 17:17:02.160308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.032 [2024-12-06 17:17:02.160675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=85678 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
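The get_main_ns_ip trace above picks the address the second target (tgt2) should bind: an associative array maps each transport to the name of the variable holding the right address, and with tcp it indirectly expands NVMF_INITIATOR_IP to 10.0.0.1. A minimal standalone rendering of that selection logic, with variable values taken from this run (the real helper lives in nvmf/common.sh):

    NVMF_FIRST_TARGET_IP=10.0.0.3
    NVMF_INITIATOR_IP=10.0.0.1
    TEST_TRANSPORT=tcp

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # RDMA tests talk to the in-namespace IP
            [tcp]=NVMF_INITIATOR_IP       # TCP tests use the host-side initiator IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z $ip ]] && return 1
        echo "${!ip}"                          # indirect expansion -> 10.0.0.1
    }

    tgt2addr=$(get_main_ns_ip)   # the second target listens here on port 4421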
00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=0f064192-2f23-4f07-81ac-84a77bea847b 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a4b6eefc-06dd-40f4-9780-7eda92545b2f 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1e27b6f0-a7c9-4110-b838-90bf86235933 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.032 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:20.291 null0 00:17:20.291 null1 00:17:20.291 null2 00:17:20.291 [2024-12-06 17:17:02.353253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.292 [2024-12-06 17:17:02.377423] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:20.292 [2024-12-06 17:17:02.392209] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:20.292 [2024-12-06 17:17:02.392723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85678 ] 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 85678 /var/tmp/tgt2.sock 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 85678 ']' 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.292 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:20.292 [2024-12-06 17:17:02.547561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.292 [2024-12-06 17:17:02.586857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.550 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.550 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:20.550 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:21.117 [2024-12-06 17:17:03.239277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.117 [2024-12-06 17:17:03.255393] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:21.117 nvme0n1 nvme0n2 00:17:21.117 nvme1n1 00:17:21.117 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:21.117 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:21.117 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:21.374 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:22.308 17:17:04 
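The waitforblk calls traced above are a plain polling loop: keep listing block devices until the freshly connected namespace shows up, sleeping a second between attempts. A sketch with the same lsblk/grep probe the trace shows (the retry cap of 15 matches the '[' 0 -lt 15 ']' check above; the real helper also does a final confirmation pass):

    # Poll until the kernel exposes the named block device, as in the trace.
    waitforblk() {
        local name=$1 i=0
        while ! lsblk -l -o NAME | grep -q -w "$name"; do
            (( i++ < 15 )) || return 1   # give up after ~15 one-second retries
            sleep 1
        done
        return 0
    }

    waitforblk nvme0n1   # returns once nvme connect has produced the namespace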
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 0f064192-2f23-4f07-81ac-84a77bea847b 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0f0641922f234f0781ac84a77bea847b 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0F0641922F234F0781AC84A77BEA847B 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 0F0641922F234F0781AC84A77BEA847B == \0\F\0\6\4\1\9\2\2\F\2\3\4\F\0\7\8\1\A\C\8\4\A\7\7\B\E\A\8\4\7\B ]] 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a4b6eefc-06dd-40f4-9780-7eda92545b2f 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a4b6eefc06dd40f497807eda92545b2f 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A4B6EEFC06DD40F497807EDA92545B2F 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A4B6EEFC06DD40F497807EDA92545B2F == \A\4\B\6\E\E\F\C\0\6\D\D\4\0\F\4\9\7\8\0\7\E\D\A\9\2\5\4\5\B\2\F ]] 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:22.308 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.566 17:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1e27b6f0-a7c9-4110-b838-90bf86235933 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1e27b6f0a7c94110b83890bf86235933 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1E27B6F0A7C94110B83890BF86235933 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1E27B6F0A7C94110B83890BF86235933 == \1\E\2\7\B\6\F\0\A\7\C\9\4\1\1\0\B\8\3\8\9\0\B\F\8\6\2\3\5\9\3\3 ]] 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 85678 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85678 ']' 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85678 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85678 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:22.566 killing process with pid 85678 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85678' 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85678 00:17:22.566 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85678 00:17:22.825 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:22.825 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:22.825 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
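For reference, each of the three checks traced above (nsid.sh@96-100) reduces to comparing the NGUID reported by Identify Namespace against the UUID the namespace was created with, dashes stripped and case-folded. A minimal standalone sketch of that comparison, assuming nvme-cli and jq are installed; check_nguid is an illustrative helper name, not part of the SPDK scripts:

# verify that a namespace's NGUID matches the UUID it was created with
check_nguid() {
    local dev=$1 uuid=$2
    local expected nguid
    # uuid2nguid equivalent: drop the dashes, then uppercase for comparison,
    # e.g. 0f064192-2f23-4f07-81ac-84a77bea847b -> 0F0641922F234F0781AC84A77BEA847B
    expected=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
    # read the NGUID out of the Identify Namespace data, as nsid.sh@42 does
    nguid=$(nvme id-ns "$dev" -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')
    [[ $nguid == "$expected" ]]
}

check_nguid /dev/nvme0n1 0f064192-2f23-4f07-81ac-84a77bea847b && echo "NGUID matches"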
00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.083 rmmod nvme_tcp 00:17:23.083 rmmod nvme_fabrics 00:17:23.083 rmmod nvme_keyring 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 85653 ']' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 85653 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 85653 ']' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 85653 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85653 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.083 killing process with pid 85653 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85653' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 85653 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 85653 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:23.083 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:23.342 ************************************ 00:17:23.342 END TEST nvmf_nsid 00:17:23.342 ************************************ 00:17:23.342 00:17:23.342 real 0m4.354s 00:17:23.342 user 0m6.940s 00:17:23.342 sys 0m1.152s 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:23.342 00:17:23.342 real 7m24.559s 00:17:23.342 user 18m0.084s 00:17:23.342 sys 1m24.355s 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.342 17:17:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.342 ************************************ 00:17:23.342 END TEST nvmf_target_extra 00:17:23.342 ************************************ 00:17:23.600 17:17:05 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:23.600 17:17:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.600 17:17:05 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.600 17:17:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:23.600 ************************************ 00:17:23.600 START TEST nvmf_host 00:17:23.600 ************************************ 00:17:23.600 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:23.600 * Looking for test storage... 
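The nvmf_veth_fini sequence above (common.sh@233-246) tears the virtual topology down in dependency order: detach interfaces from the bridge, bring them down, delete the bridge, delete the host-side veth ends, then delete the target-side ends and the namespace itself (the namespace removal happens inside remove_spdk_ns, whose trace is suppressed above). A condensed sketch of the same order, using the interface names from the log:

# detach from the bridge before bringing links down
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster || true
    ip link set "$ifc" down || true
done
ip link delete nvmf_br type bridge || true   # remove the bridge itself
ip link delete nvmf_init_if || true          # host-side veth ends
ip link delete nvmf_init_if2 || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
ip netns delete nvmf_tgt_ns_spdk || true     # finally drop the namespace

Deleting one end of a veth pair removes its peer as well, which is why a single delete per pair suffices.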
00:17:23.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:23.600 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:23.600 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:23.600 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.601 --rc genhtml_branch_coverage=1 00:17:23.601 --rc genhtml_function_coverage=1 00:17:23.601 --rc genhtml_legend=1 00:17:23.601 --rc geninfo_all_blocks=1 00:17:23.601 --rc geninfo_unexecuted_blocks=1 00:17:23.601 00:17:23.601 ' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.601 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:23.601 --rc genhtml_branch_coverage=1 00:17:23.601 --rc genhtml_function_coverage=1 00:17:23.601 --rc genhtml_legend=1 00:17:23.601 --rc geninfo_all_blocks=1 00:17:23.601 --rc geninfo_unexecuted_blocks=1 00:17:23.601 00:17:23.601 ' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.601 --rc genhtml_branch_coverage=1 00:17:23.601 --rc genhtml_function_coverage=1 00:17:23.601 --rc genhtml_legend=1 00:17:23.601 --rc geninfo_all_blocks=1 00:17:23.601 --rc geninfo_unexecuted_blocks=1 00:17:23.601 00:17:23.601 ' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.601 --rc genhtml_branch_coverage=1 00:17:23.601 --rc genhtml_function_coverage=1 00:17:23.601 --rc genhtml_legend=1 00:17:23.601 --rc geninfo_all_blocks=1 00:17:23.601 --rc geninfo_unexecuted_blocks=1 00:17:23.601 00:17:23.601 ' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
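The "line 33: [: : integer expression expected" message above (repeated every time nvmf/common.sh is sourced) is bash complaining that `[ '' -eq 1 ]` applies a numeric operator to an empty string; here it only costs a warning because the failed test is simply treated as false. A minimal reproduction with the usual guard; FLAG is a stand-in for whichever test flag is unset in this CI configuration:

FLAG=""                      # flag not exported in this run
[ "$FLAG" -eq 1 ] || true    # prints: [: : integer expression expected
# the usual fix: default the value so the numeric test always sees an integer
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi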
00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:23.601 ************************************ 00:17:23.601 START TEST nvmf_multicontroller 00:17:23.601 ************************************ 00:17:23.601 17:17:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:23.860 * Looking for test storage... 00:17:23.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:23.860 17:17:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:23.860 17:17:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:17:23.860 17:17:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:23.860 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.861 --rc genhtml_branch_coverage=1 00:17:23.861 --rc genhtml_function_coverage=1 00:17:23.861 --rc genhtml_legend=1 00:17:23.861 --rc geninfo_all_blocks=1 00:17:23.861 --rc geninfo_unexecuted_blocks=1 00:17:23.861 00:17:23.861 ' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.861 --rc genhtml_branch_coverage=1 00:17:23.861 --rc genhtml_function_coverage=1 00:17:23.861 --rc genhtml_legend=1 00:17:23.861 --rc geninfo_all_blocks=1 00:17:23.861 --rc geninfo_unexecuted_blocks=1 00:17:23.861 00:17:23.861 ' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.861 --rc genhtml_branch_coverage=1 00:17:23.861 --rc genhtml_function_coverage=1 00:17:23.861 --rc genhtml_legend=1 00:17:23.861 --rc geninfo_all_blocks=1 00:17:23.861 --rc geninfo_unexecuted_blocks=1 00:17:23.861 00:17:23.861 ' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:23.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:23.861 --rc genhtml_branch_coverage=1 00:17:23.861 --rc genhtml_function_coverage=1 00:17:23.861 --rc genhtml_legend=1 00:17:23.861 --rc geninfo_all_blocks=1 00:17:23.861 --rc geninfo_unexecuted_blocks=1 00:17:23.861 00:17:23.861 ' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:23.861 17:17:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:23.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.861 17:17:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:23.861 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:23.862 17:17:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:23.862 Cannot find device "nvmf_init_br" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:23.862 Cannot find device "nvmf_init_br2" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:23.862 Cannot find device "nvmf_tgt_br" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:23.862 Cannot find device "nvmf_tgt_br2" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:23.862 Cannot find device "nvmf_init_br" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:23.862 Cannot find device "nvmf_init_br2" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:23.862 Cannot find device "nvmf_tgt_br" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:23.862 Cannot find device "nvmf_tgt_br2" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:23.862 Cannot find device "nvmf_br" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:23.862 Cannot find device "nvmf_init_if" 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:17:23.862 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:24.120 Cannot find device "nvmf_init_if2" 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:24.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:24.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:24.120 17:17:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:24.120 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:24.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:24.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:17:24.379 00:17:24.379 --- 10.0.0.3 ping statistics --- 00:17:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.379 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:24.379 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:24.379 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:17:24.379 00:17:24.379 --- 10.0.0.4 ping statistics --- 00:17:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.379 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:24.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:24.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:17:24.379 00:17:24.379 --- 10.0.0.1 ping statistics --- 00:17:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.379 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:24.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:24.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:17:24.379 00:17:24.379 --- 10.0.0.2 ping statistics --- 00:17:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.379 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.379 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=86050 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 86050 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86050 ']' 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.380 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.380 [2024-12-06 17:17:06.547226] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
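The nvmfappstart call above launches the target inside the namespace and then blocks in waitforlisten until the application's RPC socket answers. Roughly, with the polling loop simplified (the real helper in autotest_common.sh also checks that the pid stays alive while waiting):

# start nvmf_tgt in the target namespace, remembering its pid
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# poll the default RPC socket until the application responds
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done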
00:17:24.380 [2024-12-06 17:17:06.547393] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.638 [2024-12-06 17:17:06.704219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:24.638 [2024-12-06 17:17:06.745258] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.638 [2024-12-06 17:17:06.745352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.638 [2024-12-06 17:17:06.745368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.638 [2024-12-06 17:17:06.745378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.638 [2024-12-06 17:17:06.745387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.638 [2024-12-06 17:17:06.749310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.638 [2024-12-06 17:17:06.749436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.638 [2024-12-06 17:17:06.749444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.638 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.638 [2024-12-06 17:17:06.928031] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.897 Malloc0 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.897 [2024-12-06 17:17:06.986788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.897 [2024-12-06 17:17:06.994703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.897 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:24.898 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.898 17:17:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.898 Malloc1 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=86089 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 86089 /var/tmp/bdevperf.sock 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 86089 ']' 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
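--- note: target/bdevperf setup sketched ---
The trace above builds the target side of the multicontroller test and then launches bdevperf in RPC-driven mode (-z holds it idle until a perform_tests call, -r names its private RPC socket; the results JSON later in this log confirms -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w write the workload, and -t 1 the runtime in seconds). A minimal sketch of the same target-side sequence, assuming rpc_cmd talks to the default /var/tmp/spdk.sock as it does here:
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced above
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0               # 64 MiB RAM-backed bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as a namespace
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421  # second path for multipath
cnode2 is set up the same way (Malloc1, listeners on 4420 and 4421), giving two subsystems, each reachable on two ports.
--- end note ---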
00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.898 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.156 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.156 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:17:25.156 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:25.156 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.156 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.415 NVMe0n1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.415 1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.415 2024/12/06 17:17:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 
hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:25.415 request: 00:17:25.415 { 00:17:25.415 "method": "bdev_nvme_attach_controller", 00:17:25.415 "params": { 00:17:25.415 "name": "NVMe0", 00:17:25.415 "trtype": "tcp", 00:17:25.415 "traddr": "10.0.0.3", 00:17:25.415 "adrfam": "ipv4", 00:17:25.415 "trsvcid": "4420", 00:17:25.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.415 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:25.415 "hostaddr": "10.0.0.1", 00:17:25.415 "prchk_reftag": false, 00:17:25.415 "prchk_guard": false, 00:17:25.415 "hdgst": false, 00:17:25.415 "ddgst": false, 00:17:25.415 "allow_unrecognized_csi": false 00:17:25.415 } 00:17:25.415 } 00:17:25.415 Got JSON-RPC error response 00:17:25.415 GoRPCClient: error on JSON-RPC call 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.415 2024/12/06 17:17:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: 
error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:25.415 request: 00:17:25.415 { 00:17:25.415 "method": "bdev_nvme_attach_controller", 00:17:25.415 "params": { 00:17:25.415 "name": "NVMe0", 00:17:25.415 "trtype": "tcp", 00:17:25.415 "traddr": "10.0.0.3", 00:17:25.415 "adrfam": "ipv4", 00:17:25.415 "trsvcid": "4420", 00:17:25.415 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:25.415 "hostaddr": "10.0.0.1", 00:17:25.415 "prchk_reftag": false, 00:17:25.415 "prchk_guard": false, 00:17:25.415 "hdgst": false, 00:17:25.415 "ddgst": false, 00:17:25.415 "allow_unrecognized_csi": false 00:17:25.415 } 00:17:25.415 } 00:17:25.415 Got JSON-RPC error response 00:17:25.415 GoRPCClient: error on JSON-RPC call 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.415 2024/12/06 17:17:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:17:25.415 request: 00:17:25.415 { 00:17:25.415 
"method": "bdev_nvme_attach_controller", 00:17:25.415 "params": { 00:17:25.415 "name": "NVMe0", 00:17:25.415 "trtype": "tcp", 00:17:25.415 "traddr": "10.0.0.3", 00:17:25.415 "adrfam": "ipv4", 00:17:25.415 "trsvcid": "4420", 00:17:25.415 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.415 "hostaddr": "10.0.0.1", 00:17:25.415 "prchk_reftag": false, 00:17:25.415 "prchk_guard": false, 00:17:25.415 "hdgst": false, 00:17:25.415 "ddgst": false, 00:17:25.415 "multipath": "disable", 00:17:25.415 "allow_unrecognized_csi": false 00:17:25.415 } 00:17:25.415 } 00:17:25.415 Got JSON-RPC error response 00:17:25.415 GoRPCClient: error on JSON-RPC call 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:25.415 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.416 2024/12/06 17:17:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:17:25.416 request: 00:17:25.416 { 00:17:25.416 "method": "bdev_nvme_attach_controller", 00:17:25.416 "params": { 00:17:25.416 "name": "NVMe0", 00:17:25.416 "trtype": "tcp", 00:17:25.416 "traddr": 
"10.0.0.3", 00:17:25.416 "adrfam": "ipv4", 00:17:25.416 "trsvcid": "4420", 00:17:25.416 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.416 "hostaddr": "10.0.0.1", 00:17:25.416 "prchk_reftag": false, 00:17:25.416 "prchk_guard": false, 00:17:25.416 "hdgst": false, 00:17:25.416 "ddgst": false, 00:17:25.416 "multipath": "failover", 00:17:25.416 "allow_unrecognized_csi": false 00:17:25.416 } 00:17:25.416 } 00:17:25.416 Got JSON-RPC error response 00:17:25.416 GoRPCClient: error on JSON-RPC call 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.416 NVMe0n1 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.416 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:25.416 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:25.674 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.674 17:17:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:25.674 17:17:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.608 { 00:17:26.608 "results": [ 00:17:26.608 { 00:17:26.608 "job": "NVMe0n1", 00:17:26.608 "core_mask": "0x1", 00:17:26.608 "workload": "write", 00:17:26.608 "status": "finished", 00:17:26.608 "queue_depth": 128, 00:17:26.608 "io_size": 4096, 00:17:26.608 "runtime": 1.009375, 00:17:26.608 "iops": 18397.52321981424, 00:17:26.608 "mibps": 71.86532507739938, 00:17:26.608 "io_failed": 0, 00:17:26.608 "io_timeout": 0, 00:17:26.608 "avg_latency_us": 6946.338363930093, 00:17:26.608 "min_latency_us": 2829.9636363636364, 00:17:26.608 "max_latency_us": 17277.672727272726 00:17:26.608 } 00:17:26.608 ], 00:17:26.608 "core_count": 1 00:17:26.608 } 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.608 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.868 nvme1n1 00:17:26.868 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.868 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:26.868 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:17:26.868 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.868 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.868 17:17:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 
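--- note: path switch verified via peer address ---
At this point the host-side bdevperf has attached nvme1 to cnode2 from the first initiator address (-i 10.0.0.1), and the script confirms the target agrees before detaching and re-attaching from 10.0.0.2 (the re-attach is the call traced immediately above). The check is a one-liner over the target RPC, shown here as a sketch using only commands that appear in this trace:
  # ask the target which peer address the active qpairs for cnode2 come from
  rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 | jq -r '.[].peer_address.traddr'
  # expected: 10.0.0.1 before the detach/re-attach, 10.0.0.2 afterwards
--- end note ---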
00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.868 nvme1n1 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 86089 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86089 ']' 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86089 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:26.868 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86089 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.126 killing process with pid 86089 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86089' 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86089 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86089 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - 
SIGINT SIGTERM EXIT 00:17:27.126 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:17:27.127 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:27.127 [2024-12-06 17:17:07.116097] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:27.127 [2024-12-06 17:17:07.116211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86089 ] 00:17:27.127 [2024-12-06 17:17:07.268083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.127 [2024-12-06 17:17:07.307276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.127 [2024-12-06 17:17:07.705566] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 76182999-1501-483a-8d7a-a71593815576 already exists 00:17:27.127 [2024-12-06 17:17:07.705848] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:76182999-1501-483a-8d7a-a71593815576 alias for bdev NVMe1n1 00:17:27.127 [2024-12-06 17:17:07.705952] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:27.127 Running I/O for 1 seconds... 
00:17:27.127 18377.00 IOPS, 71.79 MiB/s 00:17:27.127 Latency(us) 00:17:27.127 [2024-12-06T17:17:09.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.127 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:27.127 NVMe0n1 : 1.01 18397.52 71.87 0.00 0.00 6946.34 2829.96 17277.67 00:17:27.127 [2024-12-06T17:17:09.425Z] =================================================================================================================== 00:17:27.127 [2024-12-06T17:17:09.425Z] Total : 18397.52 71.87 0.00 0.00 6946.34 2829.96 17277.67 00:17:27.127 Received shutdown signal, test time was about 1.000000 seconds 00:17:27.127 00:17:27.127 Latency(us) 00:17:27.127 [2024-12-06T17:17:09.425Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.127 [2024-12-06T17:17:09.425Z] =================================================================================================================== 00:17:27.127 [2024-12-06T17:17:09.425Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.127 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:27.127 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:27.127 rmmod nvme_tcp 00:17:27.385 rmmod nvme_fabrics 00:17:27.385 rmmod nvme_keyring 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 86050 ']' 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 86050 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 86050 ']' 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 86050 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86050 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:27.385 killing process with pid 86050 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86050' 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 86050 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 86050 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:27.385 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
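--- note: why the earlier attach attempts failed ---
The Code=-114 errors earlier in this test are the intended negative cases: once a controller named NVMe0 exists on the bdevperf side, re-attaching under the same name is rejected when the request duplicates the existing network path, points the name at a different subsystem (cnode2), or asks for -x disable (single-path mode); only the attach to the second listener on port 4421 created a genuine second path. A sketch of one expected failure, mirroring the trace (rpc_cmd aimed at the bdevperf socket):
  # fails with JSON-RPC Code=-114: "A controller named NVMe0 already exists ..."
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
--- end note ---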
00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:17:27.643 00:17:27.643 real 0m4.045s 00:17:27.643 user 0m11.520s 00:17:27.643 sys 0m1.030s 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:27.643 ************************************ 00:17:27.643 END TEST nvmf_multicontroller 00:17:27.643 ************************************ 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.643 17:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:27.903 ************************************ 00:17:27.903 START TEST nvmf_aer 00:17:27.903 ************************************ 00:17:27.903 17:17:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:27.903 * Looking for test storage... 00:17:27.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.903 --rc genhtml_branch_coverage=1 00:17:27.903 --rc genhtml_function_coverage=1 00:17:27.903 --rc genhtml_legend=1 00:17:27.903 --rc geninfo_all_blocks=1 00:17:27.903 --rc geninfo_unexecuted_blocks=1 00:17:27.903 00:17:27.903 ' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.903 --rc genhtml_branch_coverage=1 00:17:27.903 --rc genhtml_function_coverage=1 00:17:27.903 --rc genhtml_legend=1 00:17:27.903 --rc geninfo_all_blocks=1 00:17:27.903 --rc geninfo_unexecuted_blocks=1 00:17:27.903 00:17:27.903 ' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.903 --rc genhtml_branch_coverage=1 00:17:27.903 --rc genhtml_function_coverage=1 00:17:27.903 --rc genhtml_legend=1 00:17:27.903 --rc geninfo_all_blocks=1 00:17:27.903 --rc geninfo_unexecuted_blocks=1 00:17:27.903 00:17:27.903 ' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.903 --rc genhtml_branch_coverage=1 00:17:27.903 --rc genhtml_function_coverage=1 00:17:27.903 --rc genhtml_legend=1 00:17:27.903 --rc geninfo_all_blocks=1 00:17:27.903 --rc geninfo_unexecuted_blocks=1 00:17:27.903 00:17:27.903 ' 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.903 
17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.903 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.904 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
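--- note: the virtual test network ---
With NET_TYPE=virt, nvmftestinit falls through to nvmf_veth_init (traced below): the target runs inside the nvmf_tgt_ns_spdk network namespace with 10.0.0.3/10.0.0.4, the initiator keeps 10.0.0.1/10.0.0.2 in the root namespace, and both sides hang off the nvmf_br bridge, so no physical NIC is touched. A condensed sketch of what the following trace performs, assuming root and the interface names shown (the second if2/br2 pair and the link-up steps are analogous and omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge the two halves together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
The pings that follow are the sanity check that all four addresses answer across the bridge.
--- end note ---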
00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:27.904 Cannot find device "nvmf_init_br" 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:27.904 Cannot find device "nvmf_init_br2" 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:27.904 Cannot find device "nvmf_tgt_br" 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:17:27.904 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:28.163 Cannot find device "nvmf_tgt_br2" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:28.163 Cannot find device "nvmf_init_br" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:28.163 Cannot find device "nvmf_init_br2" 00:17:28.163 17:17:10 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:28.163 Cannot find device "nvmf_tgt_br" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:28.163 Cannot find device "nvmf_tgt_br2" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:28.163 Cannot find device "nvmf_br" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:28.163 Cannot find device "nvmf_init_if" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:28.163 Cannot find device "nvmf_init_if2" 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:28.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:28.163 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:28.163 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:28.164 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:28.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:28.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:28.423 00:17:28.423 --- 10.0.0.3 ping statistics --- 00:17:28.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.423 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:28.423 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:28.423 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:17:28.423 00:17:28.423 --- 10.0.0.4 ping statistics --- 00:17:28.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.423 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:28.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:28.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:28.423 00:17:28.423 --- 10.0.0.1 ping statistics --- 00:17:28.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.423 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:28.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:28.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:17:28.423 00:17:28.423 --- 10.0.0.2 ping statistics --- 00:17:28.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.423 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:28.423 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=86358 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 86358 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 86358 ']' 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.424 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.424 [2024-12-06 17:17:10.627437] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
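A minimal sketch of what the nvmfappstart/waitforlisten step traced above amounts to: launch nvmf_tgt inside the test namespace, then block until its UNIX-domain RPC socket answers. The polling loop below is an illustrative assumption, not the harness's exact waitforlisten implementation (which adds retries and a timeout); paths and flags are copied from the trace.

    # Sketch only -- the harness wraps this in nvmfappstart/waitforlisten.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to serve requests
    # (assumed polling loop; rpc_get_methods is a cheap no-op query).
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done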
00:17:28.424 [2024-12-06 17:17:10.627675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.683 [2024-12-06 17:17:10.776830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.683 [2024-12-06 17:17:10.816756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.683 [2024-12-06 17:17:10.817057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.683 [2024-12-06 17:17:10.817214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.683 [2024-12-06 17:17:10.817404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.683 [2024-12-06 17:17:10.817458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.683 [2024-12-06 17:17:10.818469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:28.683 [2024-12-06 17:17:10.818591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.683 [2024-12-06 17:17:10.818674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.683 [2024-12-06 17:17:10.818673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.683 [2024-12-06 17:17:10.954015] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.683 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.942 Malloc0 00:17:28.942 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.942 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:28.942 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.942 17:17:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.942 [2024-12-06 17:17:11.024211] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.942 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:28.942 [ 00:17:28.942 { 00:17:28.942 "allow_any_host": true, 00:17:28.942 "hosts": [], 00:17:28.942 "listen_addresses": [], 00:17:28.942 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:28.942 "subtype": "Discovery" 00:17:28.942 }, 00:17:28.942 { 00:17:28.942 "allow_any_host": true, 00:17:28.942 "hosts": [], 00:17:28.942 "listen_addresses": [ 00:17:28.942 { 00:17:28.942 "adrfam": "IPv4", 00:17:28.942 "traddr": "10.0.0.3", 00:17:28.943 "trsvcid": "4420", 00:17:28.943 "trtype": "TCP" 00:17:28.943 } 00:17:28.943 ], 00:17:28.943 "max_cntlid": 65519, 00:17:28.943 "max_namespaces": 2, 00:17:28.943 "min_cntlid": 1, 00:17:28.943 "model_number": "SPDK bdev Controller", 00:17:28.943 "namespaces": [ 00:17:28.943 { 00:17:28.943 "bdev_name": "Malloc0", 00:17:28.943 "name": "Malloc0", 00:17:28.943 "nguid": "F87B85A1612147C2A85638899C62C7E8", 00:17:28.943 "nsid": 1, 00:17:28.943 "uuid": "f87b85a1-6121-47c2-a856-38899c62c7e8" 00:17:28.943 } 00:17:28.943 ], 00:17:28.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.943 "serial_number": "SPDK00000000000001", 00:17:28.943 "subtype": "NVMe" 00:17:28.943 } 00:17:28.943 ] 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=86403 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:17:28.943 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 Malloc1 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 [ 00:17:29.202 { 00:17:29.202 "allow_any_host": true, 00:17:29.202 "hosts": [], 00:17:29.202 "listen_addresses": [], 00:17:29.202 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:29.202 Asynchronous Event Request test 00:17:29.202 Attaching to 10.0.0.3 00:17:29.202 Attached to 10.0.0.3 00:17:29.202 Registering asynchronous event callbacks... 00:17:29.202 Starting namespace attribute notice tests for all controllers... 00:17:29.202 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:29.202 aer_cb - Changed Namespace 00:17:29.202 Cleaning up... 
00:17:29.202 "subtype": "Discovery" 00:17:29.202 }, 00:17:29.202 { 00:17:29.202 "allow_any_host": true, 00:17:29.202 "hosts": [], 00:17:29.202 "listen_addresses": [ 00:17:29.202 { 00:17:29.202 "adrfam": "IPv4", 00:17:29.202 "traddr": "10.0.0.3", 00:17:29.202 "trsvcid": "4420", 00:17:29.202 "trtype": "TCP" 00:17:29.202 } 00:17:29.202 ], 00:17:29.202 "max_cntlid": 65519, 00:17:29.202 "max_namespaces": 2, 00:17:29.202 "min_cntlid": 1, 00:17:29.202 "model_number": "SPDK bdev Controller", 00:17:29.202 "namespaces": [ 00:17:29.202 { 00:17:29.202 "bdev_name": "Malloc0", 00:17:29.202 "name": "Malloc0", 00:17:29.202 "nguid": "F87B85A1612147C2A85638899C62C7E8", 00:17:29.202 "nsid": 1, 00:17:29.202 "uuid": "f87b85a1-6121-47c2-a856-38899c62c7e8" 00:17:29.202 }, 00:17:29.202 { 00:17:29.202 "bdev_name": "Malloc1", 00:17:29.202 "name": "Malloc1", 00:17:29.202 "nguid": "28F2C3C923D54353B5D4F66CC42B512A", 00:17:29.202 "nsid": 2, 00:17:29.202 "uuid": "28f2c3c9-23d5-4353-b5d4-f66cc42b512a" 00:17:29.202 } 00:17:29.202 ], 00:17:29.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.202 "serial_number": "SPDK00000000000001", 00:17:29.202 "subtype": "NVMe" 00:17:29.202 } 00:17:29.202 ] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 86403 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:29.202 rmmod nvme_tcp 00:17:29.202 rmmod nvme_fabrics 00:17:29.202 rmmod nvme_keyring 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:29.202 
17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 86358 ']' 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 86358 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 86358 ']' 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 86358 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.202 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86358 00:17:29.462 killing process with pid 86358 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86358' 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 86358 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 86358 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:29.462 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:29.721 
17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:17:29.721 00:17:29.721 real 0m1.964s 00:17:29.721 user 0m3.666s 00:17:29.721 sys 0m0.659s 00:17:29.721 ************************************ 00:17:29.721 END TEST nvmf_aer 00:17:29.721 ************************************ 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:29.721 ************************************ 00:17:29.721 START TEST nvmf_async_init 00:17:29.721 ************************************ 00:17:29.721 17:17:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:29.981 * Looking for test storage... 
00:17:29.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:29.981 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:29.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.982 --rc genhtml_branch_coverage=1 00:17:29.982 --rc genhtml_function_coverage=1 00:17:29.982 --rc genhtml_legend=1 00:17:29.982 --rc geninfo_all_blocks=1 00:17:29.982 --rc geninfo_unexecuted_blocks=1 00:17:29.982 00:17:29.982 ' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:29.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.982 --rc genhtml_branch_coverage=1 00:17:29.982 --rc genhtml_function_coverage=1 00:17:29.982 --rc genhtml_legend=1 00:17:29.982 --rc geninfo_all_blocks=1 00:17:29.982 --rc geninfo_unexecuted_blocks=1 00:17:29.982 00:17:29.982 ' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:29.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.982 --rc genhtml_branch_coverage=1 00:17:29.982 --rc genhtml_function_coverage=1 00:17:29.982 --rc genhtml_legend=1 00:17:29.982 --rc geninfo_all_blocks=1 00:17:29.982 --rc geninfo_unexecuted_blocks=1 00:17:29.982 00:17:29.982 ' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:29.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.982 --rc genhtml_branch_coverage=1 00:17:29.982 --rc genhtml_function_coverage=1 00:17:29.982 --rc genhtml_legend=1 00:17:29.982 --rc geninfo_all_blocks=1 00:17:29.982 --rc geninfo_unexecuted_blocks=1 00:17:29.982 00:17:29.982 ' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.982 17:17:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.982 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:29.982 17:17:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=452de1cba12a42e1b8c3b8c1722ba6a8 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:29.982 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
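These nvmf_veth_init variables (the remaining interface names continue just below) parameterize the veth topology that the subsequent commands rebuild. A condensed sketch of that topology, with commands copied from the trace; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 and their bridge ends) is omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_br nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up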
00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:29.983 Cannot find device "nvmf_init_br" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:29.983 Cannot find device "nvmf_init_br2" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:29.983 Cannot find device "nvmf_tgt_br" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:29.983 Cannot find device "nvmf_tgt_br2" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:29.983 Cannot find device "nvmf_init_br" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:29.983 Cannot find device "nvmf_init_br2" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:29.983 Cannot find device "nvmf_tgt_br" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:29.983 Cannot find device "nvmf_tgt_br2" 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:17:29.983 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:30.242 Cannot find device "nvmf_br" 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:30.242 Cannot find device "nvmf_init_if" 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:30.242 Cannot find device "nvmf_init_if2" 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:30.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:17:30.242 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:30.242 17:17:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:30.242 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:30.501 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:30.501 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:17:30.501 00:17:30.501 --- 10.0.0.3 ping statistics --- 00:17:30.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.501 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:30.501 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:30.501 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:17:30.501 00:17:30.501 --- 10.0.0.4 ping statistics --- 00:17:30.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.501 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:30.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:17:30.501 00:17:30.501 --- 10.0.0.1 ping statistics --- 00:17:30.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.501 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:30.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:17:30.501 00:17:30.501 --- 10.0.0.2 ping statistics --- 00:17:30.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.501 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:30.501 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=86625 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 86625 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 86625 ']' 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.502 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.502 [2024-12-06 17:17:12.648145] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:30.502 [2024-12-06 17:17:12.648768] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.502 [2024-12-06 17:17:12.797009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.762 [2024-12-06 17:17:12.834879] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
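The ipts calls traced above tag every iptables rule with an SPDK_NVMF comment so that teardown (the iptr call seen in the earlier aer cleanup) can flush only SPDK's rules and leave the rest of the firewall untouched. A reconstructed sketch of the two helpers; the function names come from the trace, the bodies are inferred from the expanded commands it shows:

    ipts() {
        # Apply the rule, tagging it with a recoverable marker comment.
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # Drop only the tagged rules; every other rule survives.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT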
00:17:30.762 [2024-12-06 17:17:12.834950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.762 [2024-12-06 17:17:12.834965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.762 [2024-12-06 17:17:12.834975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.762 [2024-12-06 17:17:12.834983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.762 [2024-12-06 17:17:12.835360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 [2024-12-06 17:17:12.973513] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 null0 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 17:17:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 452de1cba12a42e1b8c3b8c1722ba6a8 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:30.762 [2024-12-06 17:17:13.013694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.762 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.021 nvme0n1 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.022 [ 00:17:31.022 { 00:17:31.022 "aliases": [ 00:17:31.022 "452de1cb-a12a-42e1-b8c3-b8c1722ba6a8" 00:17:31.022 ], 00:17:31.022 "assigned_rate_limits": { 00:17:31.022 "r_mbytes_per_sec": 0, 00:17:31.022 "rw_ios_per_sec": 0, 00:17:31.022 "rw_mbytes_per_sec": 0, 00:17:31.022 "w_mbytes_per_sec": 0 00:17:31.022 }, 00:17:31.022 "block_size": 512, 00:17:31.022 "claimed": false, 00:17:31.022 "driver_specific": { 00:17:31.022 "mp_policy": "active_passive", 00:17:31.022 "nvme": [ 00:17:31.022 { 00:17:31.022 "ctrlr_data": { 00:17:31.022 "ana_reporting": false, 00:17:31.022 "cntlid": 1, 00:17:31.022 "firmware_revision": "25.01", 00:17:31.022 "model_number": "SPDK bdev Controller", 00:17:31.022 "multi_ctrlr": true, 00:17:31.022 "oacs": { 00:17:31.022 "firmware": 0, 00:17:31.022 "format": 0, 00:17:31.022 "ns_manage": 0, 00:17:31.022 "security": 0 00:17:31.022 }, 00:17:31.022 "serial_number": "00000000000000000000", 00:17:31.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.022 "vendor_id": "0x8086" 00:17:31.022 }, 00:17:31.022 "ns_data": { 00:17:31.022 "can_share": true, 00:17:31.022 "id": 1 00:17:31.022 }, 00:17:31.022 "trid": { 00:17:31.022 "adrfam": "IPv4", 00:17:31.022 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.022 "traddr": "10.0.0.3", 00:17:31.022 "trsvcid": "4420", 00:17:31.022 "trtype": "TCP" 00:17:31.022 }, 00:17:31.022 "vs": { 00:17:31.022 "nvme_version": "1.3" 00:17:31.022 } 00:17:31.022 } 00:17:31.022 ] 00:17:31.022 }, 00:17:31.022 "memory_domains": [ 00:17:31.022 { 00:17:31.022 "dma_device_id": "system", 00:17:31.022 "dma_device_type": 1 00:17:31.022 } 00:17:31.022 ], 00:17:31.022 "name": "nvme0n1", 00:17:31.022 "num_blocks": 2097152, 00:17:31.022 "numa_id": -1, 00:17:31.022 "product_name": "NVMe disk", 00:17:31.022 "supported_io_types": { 00:17:31.022 "abort": true, 
00:17:31.022 "compare": true, 00:17:31.022 "compare_and_write": true, 00:17:31.022 "copy": true, 00:17:31.022 "flush": true, 00:17:31.022 "get_zone_info": false, 00:17:31.022 "nvme_admin": true, 00:17:31.022 "nvme_io": true, 00:17:31.022 "nvme_io_md": false, 00:17:31.022 "nvme_iov_md": false, 00:17:31.022 "read": true, 00:17:31.022 "reset": true, 00:17:31.022 "seek_data": false, 00:17:31.022 "seek_hole": false, 00:17:31.022 "unmap": false, 00:17:31.022 "write": true, 00:17:31.022 "write_zeroes": true, 00:17:31.022 "zcopy": false, 00:17:31.022 "zone_append": false, 00:17:31.022 "zone_management": false 00:17:31.022 }, 00:17:31.022 "uuid": "452de1cb-a12a-42e1-b8c3-b8c1722ba6a8", 00:17:31.022 "zoned": false 00:17:31.022 } 00:17:31.022 ] 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.022 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.022 [2024-12-06 17:17:13.294524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:31.022 [2024-12-06 17:17:13.294630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa2a360 (9): Bad file descriptor 00:17:31.281 [2024-12-06 17:17:13.436437] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:31.281 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 [ 00:17:31.282 { 00:17:31.282 "aliases": [ 00:17:31.282 "452de1cb-a12a-42e1-b8c3-b8c1722ba6a8" 00:17:31.282 ], 00:17:31.282 "assigned_rate_limits": { 00:17:31.282 "r_mbytes_per_sec": 0, 00:17:31.282 "rw_ios_per_sec": 0, 00:17:31.282 "rw_mbytes_per_sec": 0, 00:17:31.282 "w_mbytes_per_sec": 0 00:17:31.282 }, 00:17:31.282 "block_size": 512, 00:17:31.282 "claimed": false, 00:17:31.282 "driver_specific": { 00:17:31.282 "mp_policy": "active_passive", 00:17:31.282 "nvme": [ 00:17:31.282 { 00:17:31.282 "ctrlr_data": { 00:17:31.282 "ana_reporting": false, 00:17:31.282 "cntlid": 2, 00:17:31.282 "firmware_revision": "25.01", 00:17:31.282 "model_number": "SPDK bdev Controller", 00:17:31.282 "multi_ctrlr": true, 00:17:31.282 "oacs": { 00:17:31.282 "firmware": 0, 00:17:31.282 "format": 0, 00:17:31.282 "ns_manage": 0, 00:17:31.282 "security": 0 00:17:31.282 }, 00:17:31.282 "serial_number": "00000000000000000000", 00:17:31.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.282 "vendor_id": "0x8086" 00:17:31.282 }, 00:17:31.282 "ns_data": { 00:17:31.282 "can_share": true, 00:17:31.282 "id": 1 00:17:31.282 }, 00:17:31.282 "trid": { 00:17:31.282 "adrfam": "IPv4", 00:17:31.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.282 "traddr": "10.0.0.3", 00:17:31.282 "trsvcid": "4420", 00:17:31.282 "trtype": "TCP" 00:17:31.282 }, 00:17:31.282 "vs": { 00:17:31.282 "nvme_version": "1.3" 00:17:31.282 } 00:17:31.282 } 00:17:31.282 ] 
00:17:31.282 }, 00:17:31.282 "memory_domains": [ 00:17:31.282 { 00:17:31.282 "dma_device_id": "system", 00:17:31.282 "dma_device_type": 1 00:17:31.282 } 00:17:31.282 ], 00:17:31.282 "name": "nvme0n1", 00:17:31.282 "num_blocks": 2097152, 00:17:31.282 "numa_id": -1, 00:17:31.282 "product_name": "NVMe disk", 00:17:31.282 "supported_io_types": { 00:17:31.282 "abort": true, 00:17:31.282 "compare": true, 00:17:31.282 "compare_and_write": true, 00:17:31.282 "copy": true, 00:17:31.282 "flush": true, 00:17:31.282 "get_zone_info": false, 00:17:31.282 "nvme_admin": true, 00:17:31.282 "nvme_io": true, 00:17:31.282 "nvme_io_md": false, 00:17:31.282 "nvme_iov_md": false, 00:17:31.282 "read": true, 00:17:31.282 "reset": true, 00:17:31.282 "seek_data": false, 00:17:31.282 "seek_hole": false, 00:17:31.282 "unmap": false, 00:17:31.282 "write": true, 00:17:31.282 "write_zeroes": true, 00:17:31.282 "zcopy": false, 00:17:31.282 "zone_append": false, 00:17:31.282 "zone_management": false 00:17:31.282 }, 00:17:31.282 "uuid": "452de1cb-a12a-42e1-b8c3-b8c1722ba6a8", 00:17:31.282 "zoned": false 00:17:31.282 } 00:17:31.282 ] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.BRkZmYo3up 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.BRkZmYo3up 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.BRkZmYo3up 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 [2024-12-06 17:17:13.518736] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:31.282 [2024-12-06 17:17:13.518924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.282 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.282 [2024-12-06 17:17:13.534751] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.542 nvme0n1 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.542 [ 00:17:31.542 { 00:17:31.542 "aliases": [ 00:17:31.542 "452de1cb-a12a-42e1-b8c3-b8c1722ba6a8" 00:17:31.542 ], 00:17:31.542 "assigned_rate_limits": { 00:17:31.542 "r_mbytes_per_sec": 0, 00:17:31.542 "rw_ios_per_sec": 0, 00:17:31.542 "rw_mbytes_per_sec": 0, 00:17:31.542 "w_mbytes_per_sec": 0 00:17:31.542 }, 00:17:31.542 "block_size": 512, 00:17:31.542 "claimed": false, 00:17:31.542 "driver_specific": { 00:17:31.542 "mp_policy": "active_passive", 00:17:31.542 "nvme": [ 00:17:31.542 { 00:17:31.542 "ctrlr_data": { 00:17:31.542 "ana_reporting": false, 00:17:31.542 "cntlid": 3, 00:17:31.542 "firmware_revision": "25.01", 00:17:31.542 "model_number": "SPDK bdev Controller", 00:17:31.542 "multi_ctrlr": true, 00:17:31.542 "oacs": { 00:17:31.542 "firmware": 0, 00:17:31.542 "format": 0, 00:17:31.542 "ns_manage": 0, 00:17:31.542 "security": 0 00:17:31.542 }, 00:17:31.542 "serial_number": "00000000000000000000", 00:17:31.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.542 "vendor_id": "0x8086" 00:17:31.542 }, 00:17:31.542 "ns_data": { 00:17:31.542 "can_share": true, 00:17:31.542 "id": 1 00:17:31.542 }, 00:17:31.542 "trid": { 00:17:31.542 "adrfam": "IPv4", 00:17:31.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:31.542 "traddr": "10.0.0.3", 00:17:31.542 "trsvcid": "4421", 00:17:31.542 "trtype": "TCP" 00:17:31.542 }, 00:17:31.542 "vs": { 00:17:31.542 "nvme_version": "1.3" 00:17:31.542 } 00:17:31.542 } 00:17:31.542 ] 00:17:31.542 }, 00:17:31.542 "memory_domains": [ 00:17:31.542 { 00:17:31.542 "dma_device_id": "system", 00:17:31.542 "dma_device_type": 1 00:17:31.542 } 00:17:31.542 ], 00:17:31.542 "name": "nvme0n1", 00:17:31.542 "num_blocks": 
2097152, 00:17:31.542 "numa_id": -1, 00:17:31.542 "product_name": "NVMe disk", 00:17:31.542 "supported_io_types": { 00:17:31.542 "abort": true, 00:17:31.542 "compare": true, 00:17:31.542 "compare_and_write": true, 00:17:31.542 "copy": true, 00:17:31.542 "flush": true, 00:17:31.542 "get_zone_info": false, 00:17:31.542 "nvme_admin": true, 00:17:31.542 "nvme_io": true, 00:17:31.542 "nvme_io_md": false, 00:17:31.542 "nvme_iov_md": false, 00:17:31.542 "read": true, 00:17:31.542 "reset": true, 00:17:31.542 "seek_data": false, 00:17:31.542 "seek_hole": false, 00:17:31.542 "unmap": false, 00:17:31.542 "write": true, 00:17:31.542 "write_zeroes": true, 00:17:31.542 "zcopy": false, 00:17:31.542 "zone_append": false, 00:17:31.542 "zone_management": false 00:17:31.542 }, 00:17:31.542 "uuid": "452de1cb-a12a-42e1-b8c3-b8c1722ba6a8", 00:17:31.542 "zoned": false 00:17:31.542 } 00:17:31.542 ] 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.BRkZmYo3up 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:31.542 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.543 rmmod nvme_tcp 00:17:31.543 rmmod nvme_fabrics 00:17:31.543 rmmod nvme_keyring 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 86625 ']' 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 86625 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 86625 ']' 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 86625 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86625 00:17:31.543 killing process with pid 
86625 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86625' 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 86625 00:17:31.543 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 86625 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:31.854 17:17:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:31.854 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
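
Condensed from the interleaved xtrace above, the nvmf_async_init case drives the target and initiator through the following RPC sequence. rpc_cmd is the harness's wrapper around scripts/rpc.py (socket /var/tmp/spdk.sock); every command and argument is taken verbatim from the trace records, only the grouping and comments are added:

    # target side: TCP transport, a 1 GiB null bdev, and a subsystem exposing it on 10.0.0.3:4420
    rpc_cmd nvmf_create_transport -t tcp -o
    rpc_cmd bdev_null_create null0 1024 512        # name, size in MB, block size
    rpc_cmd bdev_wait_for_examine
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 452de1cba12a42e1b8c3b8c1722ba6a8
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # initiator side: attach, inspect the resulting bdev, then exercise a controller reset
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc_cmd bdev_get_bdevs -b nvme0n1              # first dump above: cntlid 1
    rpc_cmd bdev_nvme_reset_controller nvme0       # reconnects; second dump shows cntlid 2
    rpc_cmd bdev_nvme_detach_controller nvme0

The -g nguid passed to nvmf_subsystem_add_ns is the same value that reappears, hyphenated, as the bdev's alias and uuid in the dumps.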
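The TLS leg of the same test then re-attaches over a PSK-secured listener on port 4421. Again verbatim from the trace, except that the redirection of the key into $key_path is inferred, since bash xtrace does not print redirections:

    key_path=$(mktemp)        # /tmp/tmp.BRkZmYo3up in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    rpc_cmd keyring_file_add_key key0 "$key_path"
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # reconnect on 4421 with the same key; third dump above: cntlid 3, trsvcid 4421
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listen and the attach log "TLS support is considered experimental", as seen in the trace.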
00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:17:32.114 00:17:32.114 real 0m2.227s 00:17:32.114 user 0m1.686s 00:17:32.114 sys 0m0.649s 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.114 ************************************ 00:17:32.114 END TEST nvmf_async_init 00:17:32.114 ************************************ 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.114 ************************************ 00:17:32.114 START TEST dma 00:17:32.114 ************************************ 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:32.114 * Looking for test storage... 00:17:32.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.114 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.372 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.373 --rc genhtml_branch_coverage=1 00:17:32.373 --rc genhtml_function_coverage=1 00:17:32.373 --rc genhtml_legend=1 00:17:32.373 --rc geninfo_all_blocks=1 00:17:32.373 --rc geninfo_unexecuted_blocks=1 00:17:32.373 00:17:32.373 ' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.373 --rc genhtml_branch_coverage=1 00:17:32.373 --rc genhtml_function_coverage=1 00:17:32.373 --rc genhtml_legend=1 00:17:32.373 --rc geninfo_all_blocks=1 00:17:32.373 --rc geninfo_unexecuted_blocks=1 00:17:32.373 00:17:32.373 ' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.373 --rc genhtml_branch_coverage=1 00:17:32.373 --rc genhtml_function_coverage=1 00:17:32.373 --rc genhtml_legend=1 00:17:32.373 --rc geninfo_all_blocks=1 00:17:32.373 --rc geninfo_unexecuted_blocks=1 00:17:32.373 00:17:32.373 ' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.373 --rc genhtml_branch_coverage=1 00:17:32.373 --rc genhtml_function_coverage=1 00:17:32.373 --rc genhtml_legend=1 00:17:32.373 --rc geninfo_all_blocks=1 00:17:32.373 --rc geninfo_unexecuted_blocks=1 00:17:32.373 00:17:32.373 ' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.373 17:17:14 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.373 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:17:32.373 00:17:32.373 real 0m0.222s 00:17:32.373 user 0m0.136s 00:17:32.373 sys 0m0.098s 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:32.373 ************************************ 00:17:32.373 END TEST dma 00:17:32.373 ************************************ 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:32.373 ************************************ 00:17:32.373 START TEST nvmf_identify 00:17:32.373 ************************************ 00:17:32.373 17:17:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:32.373 * Looking for test storage... 00:17:32.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:17:32.373 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.632 --rc genhtml_branch_coverage=1 00:17:32.632 --rc genhtml_function_coverage=1 00:17:32.632 --rc genhtml_legend=1 00:17:32.632 --rc geninfo_all_blocks=1 00:17:32.632 --rc geninfo_unexecuted_blocks=1 00:17:32.632 00:17:32.632 ' 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.632 --rc genhtml_branch_coverage=1 00:17:32.632 --rc genhtml_function_coverage=1 00:17:32.632 --rc genhtml_legend=1 00:17:32.632 --rc geninfo_all_blocks=1 00:17:32.632 --rc geninfo_unexecuted_blocks=1 00:17:32.632 00:17:32.632 ' 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.632 --rc genhtml_branch_coverage=1 00:17:32.632 --rc genhtml_function_coverage=1 00:17:32.632 --rc genhtml_legend=1 00:17:32.632 --rc geninfo_all_blocks=1 00:17:32.632 --rc geninfo_unexecuted_blocks=1 00:17:32.632 00:17:32.632 ' 00:17:32.632 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:32.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:32.632 --rc genhtml_branch_coverage=1 00:17:32.633 --rc genhtml_function_coverage=1 00:17:32.633 --rc genhtml_legend=1 00:17:32.633 --rc geninfo_all_blocks=1 00:17:32.633 --rc geninfo_unexecuted_blocks=1 00:17:32.633 00:17:32.633 ' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.633 
17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:32.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.633 17:17:14 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:32.633 Cannot find device "nvmf_init_br" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:32.633 Cannot find device "nvmf_init_br2" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:32.633 Cannot find device "nvmf_tgt_br" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:32.633 Cannot find device "nvmf_tgt_br2" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:32.633 Cannot find device "nvmf_init_br" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:32.633 Cannot find device "nvmf_init_br2" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:32.633 Cannot find device "nvmf_tgt_br" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:32.633 Cannot find device "nvmf_tgt_br2" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:32.633 Cannot find device "nvmf_br" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:32.633 Cannot find device "nvmf_init_if" 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:32.633 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:32.633 Cannot find device "nvmf_init_if2" 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:32.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:32.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:32.634 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:32.892 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:32.892 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:32.892 17:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:32.892 
17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:32.892 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:32.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:32.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:17:32.893 00:17:32.893 --- 10.0.0.3 ping statistics --- 00:17:32.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.893 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:32.893 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:32.893 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.079 ms 00:17:32.893 00:17:32.893 --- 10.0.0.4 ping statistics --- 00:17:32.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.893 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:32.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:32.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:32.893 00:17:32.893 --- 10.0.0.1 ping statistics --- 00:17:32.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.893 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:32.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:32.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:17:32.893 00:17:32.893 --- 10.0.0.2 ping statistics --- 00:17:32.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:32.893 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.893 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86941 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86941 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 86941 ']' 00:17:33.150 
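[Editor's note] All four addresses answer in both directions (host to namespace and back), confirming the bridge and the iptables accepts on port 4420, so the script prepends the netns wrapper to NVMF_APP and starts the target. A minimal stand-in for the launch-and-wait step that follows, assuming an SPDK build tree (the real waitforlisten in autotest_common.sh also checks the PID and caps its retries):

    # Paths and flags copied from the trace above.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app accepts RPCs on the default UNIX socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done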
17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.150 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.150 [2024-12-06 17:17:15.254780] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:33.150 [2024-12-06 17:17:15.254880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.150 [2024-12-06 17:17:15.409823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:33.407 [2024-12-06 17:17:15.450043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.407 [2024-12-06 17:17:15.450125] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.408 [2024-12-06 17:17:15.450151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.408 [2024-12-06 17:17:15.450169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.408 [2024-12-06 17:17:15.450183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
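[Editor's note] Those app_setup_trace notices are directly usable: the target was started with -e 0xFFFF, so every tracepoint group is armed for shm id 0. Quoting the two options the notice itself offers:

    spdk_trace -s nvmf -i 0            # capture a snapshot of events from the live target
    cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the trace ring for offline analysis/debug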
00:17:33.408 [2024-12-06 17:17:15.451061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.408 [2024-12-06 17:17:15.451110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:33.408 [2024-12-06 17:17:15.451166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.408 [2024-12-06 17:17:15.451175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 [2024-12-06 17:17:15.550896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 Malloc0 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 [2024-12-06 17:17:15.647994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:33.408 [ 00:17:33.408 { 00:17:33.408 "allow_any_host": true, 00:17:33.408 "hosts": [], 00:17:33.408 "listen_addresses": [ 00:17:33.408 { 00:17:33.408 "adrfam": "IPv4", 00:17:33.408 "traddr": "10.0.0.3", 00:17:33.408 "trsvcid": "4420", 00:17:33.408 "trtype": "TCP" 00:17:33.408 } 00:17:33.408 ], 00:17:33.408 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:33.408 "subtype": "Discovery" 00:17:33.408 }, 00:17:33.408 { 00:17:33.408 "allow_any_host": true, 00:17:33.408 "hosts": [], 00:17:33.408 "listen_addresses": [ 00:17:33.408 { 00:17:33.408 "adrfam": "IPv4", 00:17:33.408 "traddr": "10.0.0.3", 00:17:33.408 "trsvcid": "4420", 00:17:33.408 "trtype": "TCP" 00:17:33.408 } 00:17:33.408 ], 00:17:33.408 "max_cntlid": 65519, 00:17:33.408 "max_namespaces": 32, 00:17:33.408 "min_cntlid": 1, 00:17:33.408 "model_number": "SPDK bdev Controller", 00:17:33.408 "namespaces": [ 00:17:33.408 { 00:17:33.408 "bdev_name": "Malloc0", 00:17:33.408 "eui64": "ABCDEF0123456789", 00:17:33.408 "name": "Malloc0", 00:17:33.408 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:33.408 "nsid": 1, 00:17:33.408 "uuid": "2767bade-5bf0-4d65-a344-c24196d24b73" 00:17:33.408 } 00:17:33.408 ], 00:17:33.408 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.408 "serial_number": "SPDK00000000000001", 00:17:33.408 "subtype": "NVMe" 00:17:33.408 } 00:17:33.408 ] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.408 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:33.408 [2024-12-06 17:17:15.701032] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
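[Editor's note] Everything spdk_nvme_identify is about to discover was configured over JSON-RPC in the lines above, and the nvmf_get_subsystems dump confirms it took effect. Collected in one place, the rpc_cmd sequence from the xtrace (rpc_cmd is a thin wrapper around scripts/rpc.py against /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420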
00:17:33.408 [2024-12-06 17:17:15.701102] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86975 ] 00:17:33.667 [2024-12-06 17:17:15.866801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:33.667 [2024-12-06 17:17:15.866869] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:33.667 [2024-12-06 17:17:15.866876] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:33.667 [2024-12-06 17:17:15.866893] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:33.667 [2024-12-06 17:17:15.866907] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:33.667 [2024-12-06 17:17:15.867248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:33.667 [2024-12-06 17:17:15.867333] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1149d90 0 00:17:33.667 [2024-12-06 17:17:15.873298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:33.667 [2024-12-06 17:17:15.873332] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:33.667 [2024-12-06 17:17:15.873340] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:33.667 [2024-12-06 17:17:15.873345] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:33.667 [2024-12-06 17:17:15.873385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.667 [2024-12-06 17:17:15.873397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.873405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.873425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:33.668 [2024-12-06 17:17:15.873473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.881299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.881327] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.881333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.881350] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:33.668 [2024-12-06 17:17:15.881359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:33.668 [2024-12-06 17:17:15.881366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:33.668 [2024-12-06 17:17:15.881394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
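[Editor's note] The *DEBUG* flood that follows is SPDK's controller-initialization state machine running over the fabric: FABRIC CONNECT brought up the admin queue and returned CNTLID 0x0001, and the host now walks the standard bring-up via FABRIC PROPERTY GET/SET — read VS and CAP, clear CC.EN and wait for CSTS.RDY = 0, set CC.EN = 1 and wait for CSTS.RDY = 1, then IDENTIFY. When only the transitions matter, a filter like the following over a saved copy of this output (build.log here is a hypothetical capture file) reduces it to a dozen lines:

    grep -o 'setting state to [^(]*' build.log | uniq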
00:17:33.668 [2024-12-06 17:17:15.881412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.881425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.881460] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.881540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.881548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.881552] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.881563] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:33.668 [2024-12-06 17:17:15.881571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:33.668 [2024-12-06 17:17:15.881580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881584] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.881597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.881617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.881676] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.881683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.881687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.881698] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:33.668 [2024-12-06 17:17:15.881707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:33.668 [2024-12-06 17:17:15.881715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.881731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.881750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.881804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.881811] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.881815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.881825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:33.668 [2024-12-06 17:17:15.881836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881841] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881845] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.881852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.881870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.881920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.881927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.881931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.881936] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.881941] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:33.668 [2024-12-06 17:17:15.881947] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:33.668 [2024-12-06 17:17:15.881955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:33.668 [2024-12-06 17:17:15.882068] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:33.668 [2024-12-06 17:17:15.882074] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:33.668 [2024-12-06 17:17:15.882084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.882089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.882093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.882101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.882122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.882176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.882184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.882188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:17:33.668 [2024-12-06 17:17:15.882192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.882198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:33.668 [2024-12-06 17:17:15.882208] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.882213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.882217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.882225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.882244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.882318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.668 [2024-12-06 17:17:15.882328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.668 [2024-12-06 17:17:15.882332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.882336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.668 [2024-12-06 17:17:15.882354] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:33.668 [2024-12-06 17:17:15.882361] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:33.668 [2024-12-06 17:17:15.882370] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:33.668 [2024-12-06 17:17:15.882387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:33.668 [2024-12-06 17:17:15.882404] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.668 [2024-12-06 17:17:15.882412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.668 [2024-12-06 17:17:15.882424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.668 [2024-12-06 17:17:15.882458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.668 [2024-12-06 17:17:15.882550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.668 [2024-12-06 17:17:15.882558] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.668 [2024-12-06 17:17:15.882562] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882566] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1149d90): datao=0, datal=4096, cccid=0 00:17:33.669 [2024-12-06 17:17:15.882572] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118a600) on tqpair(0x1149d90): expected_datao=0, payload_size=4096 00:17:33.669 [2024-12-06 17:17:15.882577] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882586] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882591] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882600] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.669 [2024-12-06 17:17:15.882606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.669 [2024-12-06 17:17:15.882610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882615] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.669 [2024-12-06 17:17:15.882624] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:33.669 [2024-12-06 17:17:15.882630] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:33.669 [2024-12-06 17:17:15.882635] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:33.669 [2024-12-06 17:17:15.882641] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:33.669 [2024-12-06 17:17:15.882646] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:33.669 [2024-12-06 17:17:15.882652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:33.669 [2024-12-06 17:17:15.882662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:33.669 [2024-12-06 17:17:15.882670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.882687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.669 [2024-12-06 17:17:15.882708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.669 [2024-12-06 17:17:15.882775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.669 [2024-12-06 17:17:15.882782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.669 [2024-12-06 17:17:15.882786] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882790] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.669 [2024-12-06 17:17:15.882806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.882823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.669 
[2024-12-06 17:17:15.882830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.882845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.669 [2024-12-06 17:17:15.882851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.882866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.669 [2024-12-06 17:17:15.882873] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.882888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.669 [2024-12-06 17:17:15.882893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:33.669 [2024-12-06 17:17:15.882902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:33.669 [2024-12-06 17:17:15.882910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.882914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.882921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.669 [2024-12-06 17:17:15.882943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a600, cid 0, qid 0 00:17:33.669 [2024-12-06 17:17:15.882950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a780, cid 1, qid 0 00:17:33.669 [2024-12-06 17:17:15.882956] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118a900, cid 2, qid 0 00:17:33.669 [2024-12-06 17:17:15.882961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.669 [2024-12-06 17:17:15.882966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118ac00, cid 4, qid 0 00:17:33.669 [2024-12-06 17:17:15.883059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.669 [2024-12-06 17:17:15.883066] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.669 [2024-12-06 17:17:15.883070] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118ac00) on tqpair=0x1149d90 00:17:33.669 [2024-12-06 
17:17:15.883080] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:33.669 [2024-12-06 17:17:15.883090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:33.669 [2024-12-06 17:17:15.883103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.883116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.669 [2024-12-06 17:17:15.883136] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118ac00, cid 4, qid 0 00:17:33.669 [2024-12-06 17:17:15.883202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.669 [2024-12-06 17:17:15.883209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.669 [2024-12-06 17:17:15.883213] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883217] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1149d90): datao=0, datal=4096, cccid=4 00:17:33.669 [2024-12-06 17:17:15.883222] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118ac00) on tqpair(0x1149d90): expected_datao=0, payload_size=4096 00:17:33.669 [2024-12-06 17:17:15.883227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883234] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883238] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.669 [2024-12-06 17:17:15.883253] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.669 [2024-12-06 17:17:15.883257] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118ac00) on tqpair=0x1149d90 00:17:33.669 [2024-12-06 17:17:15.883292] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:33.669 [2024-12-06 17:17:15.883329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.883345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.669 [2024-12-06 17:17:15.883353] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1149d90) 00:17:33.669 [2024-12-06 17:17:15.883368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.669 [2024-12-06 17:17:15.883403] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118ac00, cid 4, qid 0 00:17:33.669 [2024-12-06 17:17:15.883413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118ad80, cid 5, qid 0 00:17:33.669 [2024-12-06 17:17:15.883528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.669 [2024-12-06 17:17:15.883535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.669 [2024-12-06 17:17:15.883539] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.669 [2024-12-06 17:17:15.883543] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1149d90): datao=0, datal=1024, cccid=4 00:17:33.670 [2024-12-06 17:17:15.883548] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118ac00) on tqpair(0x1149d90): expected_datao=0, payload_size=1024 00:17:33.670 [2024-12-06 17:17:15.883553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.883560] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.883564] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.883571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.670 [2024-12-06 17:17:15.883577] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.670 [2024-12-06 17:17:15.883581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.883585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118ad80) on tqpair=0x1149d90 00:17:33.670 [2024-12-06 17:17:15.929303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.670 [2024-12-06 17:17:15.929341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.670 [2024-12-06 17:17:15.929348] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118ac00) on tqpair=0x1149d90 00:17:33.670 [2024-12-06 17:17:15.929381] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1149d90) 00:17:33.670 [2024-12-06 17:17:15.929416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.670 [2024-12-06 17:17:15.929468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118ac00, cid 4, qid 0 00:17:33.670 [2024-12-06 17:17:15.929585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.670 [2024-12-06 17:17:15.929593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.670 [2024-12-06 17:17:15.929597] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929601] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1149d90): datao=0, datal=3072, cccid=4 00:17:33.670 [2024-12-06 17:17:15.929606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118ac00) on tqpair(0x1149d90): expected_datao=0, payload_size=3072 00:17:33.670 [2024-12-06 17:17:15.929612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
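[Editor's note] The GET LOG PAGE pairs above are the discovery read itself, done in stages: a first chunk of the discovery log page (datal=1024), the remainder once the record count is known (datal=3072 for this two-record page), and finally the first 8 bytes again (datal=8) to verify the generation counter did not change mid-transfer. The printout that follows can be cross-checked from the host side with nvme-cli, assuming it is installed (it is not part of this test run):

    nvme discover -t tcp -a 10.0.0.3 -s 4420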
00:17:33.670 [2024-12-06 17:17:15.929626] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.670 [2024-12-06 17:17:15.929642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.670 [2024-12-06 17:17:15.929646] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929650] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118ac00) on tqpair=0x1149d90 00:17:33.670 [2024-12-06 17:17:15.929664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1149d90) 00:17:33.670 [2024-12-06 17:17:15.929678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.670 [2024-12-06 17:17:15.929708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118ac00, cid 4, qid 0 00:17:33.670 [2024-12-06 17:17:15.929783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.670 [2024-12-06 17:17:15.929790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.670 [2024-12-06 17:17:15.929794] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929798] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1149d90): datao=0, datal=8, cccid=4 00:17:33.670 [2024-12-06 17:17:15.929803] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x118ac00) on tqpair(0x1149d90): expected_datao=0, payload_size=8 00:17:33.670 [2024-12-06 17:17:15.929808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929816] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.670 [2024-12-06 17:17:15.929820] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.934 ===================================================== 00:17:33.934 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:33.934 ===================================================== 00:17:33.934 Controller Capabilities/Features 00:17:33.934 ================================ 00:17:33.934 Vendor ID: 0000 00:17:33.934 Subsystem Vendor ID: 0000 00:17:33.934 Serial Number: .................... 00:17:33.934 Model Number: ........................................ 
00:17:33.934 Firmware Version: 25.01 00:17:33.934 Recommended Arb Burst: 0 00:17:33.934 IEEE OUI Identifier: 00 00 00 00:17:33.934 Multi-path I/O 00:17:33.934 May have multiple subsystem ports: No 00:17:33.934 May have multiple controllers: No 00:17:33.934 Associated with SR-IOV VF: No 00:17:33.934 Max Data Transfer Size: 131072 00:17:33.934 Max Number of Namespaces: 0 00:17:33.934 Max Number of I/O Queues: 1024 00:17:33.934 NVMe Specification Version (VS): 1.3 00:17:33.934 NVMe Specification Version (Identify): 1.3 00:17:33.934 Maximum Queue Entries: 128 00:17:33.934 Contiguous Queues Required: Yes 00:17:33.934 Arbitration Mechanisms Supported 00:17:33.934 Weighted Round Robin: Not Supported 00:17:33.934 Vendor Specific: Not Supported 00:17:33.934 Reset Timeout: 15000 ms 00:17:33.934 Doorbell Stride: 4 bytes 00:17:33.934 NVM Subsystem Reset: Not Supported 00:17:33.934 Command Sets Supported 00:17:33.934 NVM Command Set: Supported 00:17:33.934 Boot Partition: Not Supported 00:17:33.934 Memory Page Size Minimum: 4096 bytes 00:17:33.934 Memory Page Size Maximum: 4096 bytes 00:17:33.934 Persistent Memory Region: Not Supported 00:17:33.934 Optional Asynchronous Events Supported 00:17:33.934 Namespace Attribute Notices: Not Supported 00:17:33.934 Firmware Activation Notices: Not Supported 00:17:33.934 ANA Change Notices: Not Supported 00:17:33.934 PLE Aggregate Log Change Notices: Not Supported 00:17:33.934 LBA Status Info Alert Notices: Not Supported 00:17:33.934 EGE Aggregate Log Change Notices: Not Supported 00:17:33.934 Normal NVM Subsystem Shutdown event: Not Supported 00:17:33.934 Zone Descriptor Change Notices: Not Supported 00:17:33.934 Discovery Log Change Notices: Supported 00:17:33.934 Controller Attributes 00:17:33.934 128-bit Host Identifier: Not Supported 00:17:33.934 Non-Operational Permissive Mode: Not Supported 00:17:33.934 NVM Sets: Not Supported 00:17:33.934 Read Recovery Levels: Not Supported 00:17:33.934 Endurance Groups: Not Supported 00:17:33.934 Predictable Latency Mode: Not Supported 00:17:33.934 Traffic Based Keep ALive: Not Supported 00:17:33.934 Namespace Granularity: Not Supported 00:17:33.934 SQ Associations: Not Supported 00:17:33.934 UUID List: Not Supported 00:17:33.934 Multi-Domain Subsystem: Not Supported 00:17:33.934 Fixed Capacity Management: Not Supported 00:17:33.934 Variable Capacity Management: Not Supported 00:17:33.934 Delete Endurance Group: Not Supported 00:17:33.935 Delete NVM Set: Not Supported 00:17:33.935 Extended LBA Formats Supported: Not Supported 00:17:33.935 Flexible Data Placement Supported: Not Supported 00:17:33.935 00:17:33.935 Controller Memory Buffer Support 00:17:33.935 ================================ 00:17:33.935 Supported: No 00:17:33.935 00:17:33.935 Persistent Memory Region Support 00:17:33.935 ================================ 00:17:33.935 Supported: No 00:17:33.935 00:17:33.935 Admin Command Set Attributes 00:17:33.935 ============================ 00:17:33.935 Security Send/Receive: Not Supported 00:17:33.935 Format NVM: Not Supported 00:17:33.935 Firmware Activate/Download: Not Supported 00:17:33.935 Namespace Management: Not Supported 00:17:33.935 Device Self-Test: Not Supported 00:17:33.935 Directives: Not Supported 00:17:33.935 NVMe-MI: Not Supported 00:17:33.935 Virtualization Management: Not Supported 00:17:33.935 Doorbell Buffer Config: Not Supported 00:17:33.935 Get LBA Status Capability: Not Supported 00:17:33.935 Command & Feature Lockdown Capability: Not Supported 00:17:33.935 Abort Command Limit: 1 00:17:33.935 Async 
Event Request Limit: 4 00:17:33.935 Number of Firmware Slots: N/A 00:17:33.935 Firmware Slot 1 Read-Only: N/A 00:17:33.935 [2024-12-06 17:17:15.971355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.971389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.971396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118ac00) on tqpair=0x1149d90 00:17:33.935 Firmware Activation Without Reset: N/A 00:17:33.935 Multiple Update Detection Support: N/A 00:17:33.935 Firmware Update Granularity: No Information Provided 00:17:33.935 Per-Namespace SMART Log: No 00:17:33.935 Asymmetric Namespace Access Log Page: Not Supported 00:17:33.935 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:33.935 Command Effects Log Page: Not Supported 00:17:33.935 Get Log Page Extended Data: Supported 00:17:33.935 Telemetry Log Pages: Not Supported 00:17:33.935 Persistent Event Log Pages: Not Supported 00:17:33.935 Supported Log Pages Log Page: May Support 00:17:33.935 Commands Supported & Effects Log Page: Not Supported 00:17:33.935 Feature Identifiers & Effects Log Page: May Support 00:17:33.935 NVMe-MI Commands & Effects Log Page: May Support 00:17:33.935 Data Area 4 for Telemetry Log: Not Supported 00:17:33.935 Error Log Page Entries Supported: 128 00:17:33.935 Keep Alive: Not Supported 00:17:33.935 00:17:33.935 NVM Command Set Attributes 00:17:33.935 ========================== 00:17:33.935 Submission Queue Entry Size 00:17:33.935 Max: 1 00:17:33.935 Min: 1 00:17:33.935 Completion Queue Entry Size 00:17:33.935 Max: 1 00:17:33.935 Min: 1 00:17:33.935 Number of Namespaces: 0 00:17:33.935 Compare Command: Not Supported 00:17:33.935 Write Uncorrectable Command: Not Supported 00:17:33.935 Dataset Management Command: Not Supported 00:17:33.935 Write Zeroes Command: Not Supported 00:17:33.935 Set Features Save Field: Not Supported 00:17:33.935 Reservations: Not Supported 00:17:33.935 Timestamp: Not Supported 00:17:33.935 Copy: Not Supported 00:17:33.935 Volatile Write Cache: Not Present 00:17:33.935 Atomic Write Unit (Normal): 1 00:17:33.935 Atomic Write Unit (PFail): 1 00:17:33.935 Atomic Compare & Write Unit: 1 00:17:33.935 Fused Compare & Write: Supported 00:17:33.935 Scatter-Gather List 00:17:33.935 SGL Command Set: Supported 00:17:33.935 SGL Keyed: Supported 00:17:33.935 SGL Bit Bucket Descriptor: Not Supported 00:17:33.935 SGL Metadata Pointer: Not Supported 00:17:33.935 Oversized SGL: Not Supported 00:17:33.935 SGL Metadata Address: Not Supported 00:17:33.935 SGL Offset: Supported 00:17:33.935 Transport SGL Data Block: Not Supported 00:17:33.935 Replay Protected Memory Block: Not Supported 00:17:33.935 00:17:33.935 Firmware Slot Information 00:17:33.935 ========================= 00:17:33.935 Active slot: 0 00:17:33.935 00:17:33.935 00:17:33.935 Error Log 00:17:33.935 ========= 00:17:33.935 00:17:33.935 Active Namespaces 00:17:33.935 ================= 00:17:33.935 Discovery Log Page 00:17:33.935 ================== 00:17:33.935 Generation Counter: 2 00:17:33.935 Number of Records: 2 00:17:33.935 Record Format: 0 00:17:33.935 00:17:33.935 Discovery Log Entry 0 00:17:33.935 ---------------------- 00:17:33.935 Transport Type: 3 (TCP) 00:17:33.935 Address Family: 1 (IPv4) 00:17:33.935 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:33.935 Entry Flags: 00:17:33.935 Duplicate Returned 
Information: 1 00:17:33.935 Explicit Persistent Connection Support for Discovery: 1 00:17:33.935 Transport Requirements: 00:17:33.935 Secure Channel: Not Required 00:17:33.935 Port ID: 0 (0x0000) 00:17:33.935 Controller ID: 65535 (0xffff) 00:17:33.935 Admin Max SQ Size: 128 00:17:33.935 Transport Service Identifier: 4420 00:17:33.935 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:33.935 Transport Address: 10.0.0.3 00:17:33.935 Discovery Log Entry 1 00:17:33.935 ---------------------- 00:17:33.935 Transport Type: 3 (TCP) 00:17:33.935 Address Family: 1 (IPv4) 00:17:33.935 Subsystem Type: 2 (NVM Subsystem) 00:17:33.935 Entry Flags: 00:17:33.935 Duplicate Returned Information: 0 00:17:33.935 Explicit Persistent Connection Support for Discovery: 0 00:17:33.935 Transport Requirements: 00:17:33.935 Secure Channel: Not Required 00:17:33.935 Port ID: 0 (0x0000) 00:17:33.935 Controller ID: 65535 (0xffff) 00:17:33.935 Admin Max SQ Size: 128 00:17:33.935 Transport Service Identifier: 4420 00:17:33.935 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:33.935 Transport Address: 10.0.0.3 [2024-12-06 17:17:15.971527] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:33.935 [2024-12-06 17:17:15.971543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a600) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.971551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.935 [2024-12-06 17:17:15.971558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a780) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.971564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.935 [2024-12-06 17:17:15.971569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118a900) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.971574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.935 [2024-12-06 17:17:15.971580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.971585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:33.935 [2024-12-06 17:17:15.971598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.971618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.971647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.971712] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.971719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.971723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971728] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.971738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.971755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.971780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.971860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.971867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.971871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.971885] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:33.935 [2024-12-06 17:17:15.971891] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:33.935 [2024-12-06 17:17:15.971903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.971912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.971920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.971940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.971999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.972010] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972014] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.972026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.972043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.972061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.972114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 
17:17:15.972125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.972140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.972157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.972175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.972226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.972236] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.972251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.972285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.972308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.972366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.972377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.972392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.972409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.972427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.972483] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972490] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.972494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on 
tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.972509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.972525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.972543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.972599] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972606] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.935 [2024-12-06 17:17:15.972609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.935 [2024-12-06 17:17:15.972624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.935 [2024-12-06 17:17:15.972633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.935 [2024-12-06 17:17:15.972641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.935 [2024-12-06 17:17:15.972659] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.935 [2024-12-06 17:17:15.972711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.935 [2024-12-06 17:17:15.972718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.972722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.972737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.936 [2024-12-06 17:17:15.972754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:15.972771] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.936 [2024-12-06 17:17:15.972826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:15.972833] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.972837] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.972852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972857] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.936 [2024-12-06 17:17:15.972869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:15.972886] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.936 [2024-12-06 17:17:15.972943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:15.972950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.972954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.972969] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.972978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.936 [2024-12-06 17:17:15.972985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:15.973003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.936 [2024-12-06 17:17:15.973053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:15.973060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.973064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.973068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.973079] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.973084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.973088] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.936 [2024-12-06 17:17:15.973095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:15.973113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.936 [2024-12-06 17:17:15.973171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:15.973178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.973182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.973186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.973197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.973202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.973206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.936 
[2024-12-06 17:17:15.973214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:15.973231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.936 [2024-12-06 17:17:15.977293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:15.977314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.977319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.977324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.977340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.977346] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.977350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1149d90) 00:17:33.936 [2024-12-06 17:17:15.977360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:15.977401] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x118aa80, cid 3, qid 0 00:17:33.936 [2024-12-06 17:17:15.977465] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:15.977475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:15.977479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:15.977484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x118aa80) on tqpair=0x1149d90 00:17:33.936 [2024-12-06 17:17:15.977493] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:17:33.936 00:17:33.936 17:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:33.936 [2024-12-06 17:17:16.021145] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:17:33.936 [2024-12-06 17:17:16.021204] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86982 ] 00:17:33.936 [2024-12-06 17:17:16.181591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:33.936 [2024-12-06 17:17:16.181658] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:33.936 [2024-12-06 17:17:16.181666] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:33.936 [2024-12-06 17:17:16.181683] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:33.936 [2024-12-06 17:17:16.181696] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:33.936 [2024-12-06 17:17:16.182007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:33.936 [2024-12-06 17:17:16.182072] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x15edd90 0 00:17:33.936 [2024-12-06 17:17:16.196298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:33.936 [2024-12-06 17:17:16.196325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:33.936 [2024-12-06 17:17:16.196333] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:33.936 [2024-12-06 17:17:16.196337] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:33.936 [2024-12-06 17:17:16.196371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.196379] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.196384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.196399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:33.936 [2024-12-06 17:17:16.196433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.204295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.204324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:16.204330] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204336] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.936 [2024-12-06 17:17:16.204349] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:33.936 [2024-12-06 17:17:16.204358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:33.936 [2024-12-06 17:17:16.204366] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:33.936 [2024-12-06 17:17:16.204387] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204398] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.204409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.204442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.204525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.204535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:16.204539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.936 [2024-12-06 17:17:16.204550] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:33.936 [2024-12-06 17:17:16.204560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:33.936 [2024-12-06 17:17:16.204569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.204586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.204610] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.204667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.204677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:16.204684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.936 [2024-12-06 17:17:16.204701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:33.936 [2024-12-06 17:17:16.204713] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:33.936 [2024-12-06 17:17:16.204722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.204738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.204760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.204815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.204823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 
[2024-12-06 17:17:16.204828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.936 [2024-12-06 17:17:16.204839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:33.936 [2024-12-06 17:17:16.204850] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.204867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.204896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.204946] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.204954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:16.204958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.204962] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.936 [2024-12-06 17:17:16.204968] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:33.936 [2024-12-06 17:17:16.204973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:33.936 [2024-12-06 17:17:16.204982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:33.936 [2024-12-06 17:17:16.205095] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:33.936 [2024-12-06 17:17:16.205105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:33.936 [2024-12-06 17:17:16.205121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.205142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.205165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.205226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.205234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:16.205238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 
00:17:33.936 [2024-12-06 17:17:16.205248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:33.936 [2024-12-06 17:17:16.205259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.205292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.205314] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.205373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.936 [2024-12-06 17:17:16.205380] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.936 [2024-12-06 17:17:16.205384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.936 [2024-12-06 17:17:16.205393] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:33.936 [2024-12-06 17:17:16.205399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:33.936 [2024-12-06 17:17:16.205408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:33.936 [2024-12-06 17:17:16.205419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:33.936 [2024-12-06 17:17:16.205431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.936 [2024-12-06 17:17:16.205436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.936 [2024-12-06 17:17:16.205444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.936 [2024-12-06 17:17:16.205466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.936 [2024-12-06 17:17:16.205569] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.937 [2024-12-06 17:17:16.205578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.937 [2024-12-06 17:17:16.205582] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205586] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=4096, cccid=0 00:17:33.937 [2024-12-06 17:17:16.205591] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162e600) on tqpair(0x15edd90): expected_datao=0, payload_size=4096 00:17:33.937 [2024-12-06 17:17:16.205597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205605] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205610] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.205626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.205630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.205644] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:33.937 [2024-12-06 17:17:16.205649] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:33.937 [2024-12-06 17:17:16.205654] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:33.937 [2024-12-06 17:17:16.205659] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:33.937 [2024-12-06 17:17:16.205665] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:33.937 [2024-12-06 17:17:16.205670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.205680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.205688] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.205705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.937 [2024-12-06 17:17:16.205728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.937 [2024-12-06 17:17:16.205788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.205796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.205800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.205818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205828] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.205835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.937 [2024-12-06 17:17:16.205843] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205847] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 
17:17:16.205851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.205857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.937 [2024-12-06 17:17:16.205864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.205879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.937 [2024-12-06 17:17:16.205885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.205900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.937 [2024-12-06 17:17:16.205905] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.205915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.205923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.205927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.205934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.937 [2024-12-06 17:17:16.205960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e600, cid 0, qid 0 00:17:33.937 [2024-12-06 17:17:16.205973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e780, cid 1, qid 0 00:17:33.937 [2024-12-06 17:17:16.205981] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162e900, cid 2, qid 0 00:17:33.937 [2024-12-06 17:17:16.205986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea80, cid 3, qid 0 00:17:33.937 [2024-12-06 17:17:16.205991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.937 [2024-12-06 17:17:16.206083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.206100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.206105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.206116] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:33.937 [2024-12-06 17:17:16.206127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206153] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.206171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:33.937 [2024-12-06 17:17:16.206194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.937 [2024-12-06 17:17:16.206248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.206256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.206260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.206362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206397] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.206411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.937 [2024-12-06 17:17:16.206436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.937 [2024-12-06 17:17:16.206522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.937 [2024-12-06 17:17:16.206530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.937 [2024-12-06 17:17:16.206534] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206538] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=4096, cccid=4 00:17:33.937 [2024-12-06 17:17:16.206543] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ec00) on tqpair(0x15edd90): expected_datao=0, payload_size=4096 00:17:33.937 [2024-12-06 17:17:16.206549] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206556] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206561] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 
17:17:16.206570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.206576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.206580] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.206603] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:33.937 [2024-12-06 17:17:16.206619] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206645] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.206653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.937 [2024-12-06 17:17:16.206676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.937 [2024-12-06 17:17:16.206764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.937 [2024-12-06 17:17:16.206777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.937 [2024-12-06 17:17:16.206782] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206786] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=4096, cccid=4 00:17:33.937 [2024-12-06 17:17:16.206791] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ec00) on tqpair(0x15edd90): expected_datao=0, payload_size=4096 00:17:33.937 [2024-12-06 17:17:16.206796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206804] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.206823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.206827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.206849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.206872] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206877] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.206885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.937 [2024-12-06 17:17:16.206908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.937 [2024-12-06 17:17:16.206976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.937 [2024-12-06 17:17:16.206983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.937 [2024-12-06 17:17:16.206987] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.206991] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=4096, cccid=4 00:17:33.937 [2024-12-06 17:17:16.206996] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ec00) on tqpair(0x15edd90): expected_datao=0, payload_size=4096 00:17:33.937 [2024-12-06 17:17:16.207001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.207008] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.207013] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.207021] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.937 [2024-12-06 17:17:16.207028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.937 [2024-12-06 17:17:16.207031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.937 [2024-12-06 17:17:16.207036] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.937 [2024-12-06 17:17:16.207046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207092] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:33.937 [2024-12-06 17:17:16.207097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:33.937 [2024-12-06 17:17:16.207103] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:33.937 [2024-12-06 17:17:16.207122] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.937 
[2024-12-06 17:17:16.207127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.937 [2024-12-06 17:17:16.207135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.937 [2024-12-06 17:17:16.207143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:33.938 [2024-12-06 17:17:16.207186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.938 [2024-12-06 17:17:16.207193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed80, cid 5, qid 0 00:17:33.938 [2024-12-06 17:17:16.207279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.207288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.207292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.207305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.207311] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.207315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed80) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.207331] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207336] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed80, cid 5, qid 0 00:17:33.938 [2024-12-06 17:17:16.207428] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.207435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.207439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed80) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.207455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207486] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed80, cid 5, qid 0 00:17:33.938 [2024-12-06 17:17:16.207544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.207551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.207555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed80) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.207570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed80, cid 5, qid 0 00:17:33.938 [2024-12-06 17:17:16.207653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.207666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.207670] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed80) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.207696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207719] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207724] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207739] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15edd90) 00:17:33.938 [2024-12-06 17:17:16.207774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.938 [2024-12-06 17:17:16.207797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ed80, cid 5, qid 0 00:17:33.938 
[2024-12-06 17:17:16.207805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ec00, cid 4, qid 0 00:17:33.938 [2024-12-06 17:17:16.207810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ef00, cid 6, qid 0 00:17:33.938 [2024-12-06 17:17:16.207815] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162f080, cid 7, qid 0 00:17:33.938 [2024-12-06 17:17:16.207958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.938 [2024-12-06 17:17:16.207974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.938 [2024-12-06 17:17:16.207979] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.207983] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=8192, cccid=5 00:17:33.938 [2024-12-06 17:17:16.207988] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ed80) on tqpair(0x15edd90): expected_datao=0, payload_size=8192 00:17:33.938 [2024-12-06 17:17:16.207993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208017] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.938 [2024-12-06 17:17:16.208029] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.938 [2024-12-06 17:17:16.208033] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208037] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=512, cccid=4 00:17:33.938 [2024-12-06 17:17:16.208041] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ec00) on tqpair(0x15edd90): expected_datao=0, payload_size=512 00:17:33.938 [2024-12-06 17:17:16.208046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208053] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208057] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.938 [2024-12-06 17:17:16.208069] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.938 [2024-12-06 17:17:16.208072] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208076] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=512, cccid=6 00:17:33.938 [2024-12-06 17:17:16.208081] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162ef00) on tqpair(0x15edd90): expected_datao=0, payload_size=512 00:17:33.938 [2024-12-06 17:17:16.208085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208092] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208096] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:33.938 [2024-12-06 17:17:16.208108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:33.938 [2024-12-06 17:17:16.208112] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208116] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x15edd90): datao=0, datal=4096, cccid=7 00:17:33.938 [2024-12-06 17:17:16.208121] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x162f080) on tqpair(0x15edd90): expected_datao=0, payload_size=4096 00:17:33.938 [2024-12-06 17:17:16.208125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208132] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208136] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.208148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.208152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ed80) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.208173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.208180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.208184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ec00) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.208201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.208208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.208213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ef00) on tqpair=0x15edd90 00:17:33.938 [2024-12-06 17:17:16.208225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:33.938 [2024-12-06 17:17:16.208231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:33.938 [2024-12-06 17:17:16.208235] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:33.938 [2024-12-06 17:17:16.208239] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162f080) on tqpair=0x15edd90 00:17:33.938 ===================================================== 00:17:33.938 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:33.938 ===================================================== 00:17:33.938 Controller Capabilities/Features 00:17:33.938 ================================ 00:17:33.938 Vendor ID: 8086 00:17:33.938 Subsystem Vendor ID: 8086 00:17:33.938 Serial Number: SPDK00000000000001 00:17:33.938 Model Number: SPDK bdev Controller 00:17:33.938 Firmware Version: 25.01 00:17:33.938 Recommended Arb Burst: 6 00:17:33.938 IEEE OUI Identifier: e4 d2 5c 00:17:33.938 Multi-path I/O 00:17:33.938 May have multiple subsystem ports: Yes 00:17:33.938 May have multiple controllers: Yes 00:17:33.938 Associated with SR-IOV VF: No 00:17:33.938 Max Data Transfer Size: 131072 00:17:33.938 Max Number of Namespaces: 32 00:17:33.938 Max Number of I/O Queues: 127 00:17:33.938 NVMe Specification Version (VS): 1.3 00:17:33.938 NVMe Specification Version (Identify): 1.3 
00:17:33.938 Maximum Queue Entries: 128 00:17:33.938 Contiguous Queues Required: Yes 00:17:33.938 Arbitration Mechanisms Supported 00:17:33.938 Weighted Round Robin: Not Supported 00:17:33.938 Vendor Specific: Not Supported 00:17:33.938 Reset Timeout: 15000 ms 00:17:33.938 Doorbell Stride: 4 bytes 00:17:33.938 NVM Subsystem Reset: Not Supported 00:17:33.938 Command Sets Supported 00:17:33.938 NVM Command Set: Supported 00:17:33.938 Boot Partition: Not Supported 00:17:33.938 Memory Page Size Minimum: 4096 bytes 00:17:33.938 Memory Page Size Maximum: 4096 bytes 00:17:33.938 Persistent Memory Region: Not Supported 00:17:33.938 Optional Asynchronous Events Supported 00:17:33.938 Namespace Attribute Notices: Supported 00:17:33.938 Firmware Activation Notices: Not Supported 00:17:33.938 ANA Change Notices: Not Supported 00:17:33.938 PLE Aggregate Log Change Notices: Not Supported 00:17:33.938 LBA Status Info Alert Notices: Not Supported 00:17:33.938 EGE Aggregate Log Change Notices: Not Supported 00:17:33.938 Normal NVM Subsystem Shutdown event: Not Supported 00:17:33.938 Zone Descriptor Change Notices: Not Supported 00:17:33.938 Discovery Log Change Notices: Not Supported 00:17:33.938 Controller Attributes 00:17:33.938 128-bit Host Identifier: Supported 00:17:33.938 Non-Operational Permissive Mode: Not Supported 00:17:33.938 NVM Sets: Not Supported 00:17:33.938 Read Recovery Levels: Not Supported 00:17:33.938 Endurance Groups: Not Supported 00:17:33.938 Predictable Latency Mode: Not Supported 00:17:33.938 Traffic Based Keep Alive: Not Supported 00:17:33.938 Namespace Granularity: Not Supported 00:17:33.938 SQ Associations: Not Supported 00:17:33.938 UUID List: Not Supported 00:17:33.938 Multi-Domain Subsystem: Not Supported 00:17:33.938 Fixed Capacity Management: Not Supported 00:17:33.938 Variable Capacity Management: Not Supported 00:17:33.938 Delete Endurance Group: Not Supported 00:17:33.938 Delete NVM Set: Not Supported 00:17:33.938 Extended LBA Formats Supported: Not Supported 00:17:33.938 Flexible Data Placement Supported: Not Supported 00:17:33.938 00:17:33.938 Controller Memory Buffer Support 00:17:33.938 ================================ 00:17:33.938 Supported: No 00:17:33.938 00:17:33.938 Persistent Memory Region Support 00:17:33.938 ================================ 00:17:33.938 Supported: No 00:17:33.938 00:17:33.938 Admin Command Set Attributes 00:17:33.938 ============================ 00:17:33.938 Security Send/Receive: Not Supported 00:17:33.938 Format NVM: Not Supported 00:17:33.938 Firmware Activate/Download: Not Supported 00:17:33.938 Namespace Management: Not Supported 00:17:33.938 Device Self-Test: Not Supported 00:17:33.938 Directives: Not Supported 00:17:33.938 NVMe-MI: Not Supported 00:17:33.938 Virtualization Management: Not Supported 00:17:33.938 Doorbell Buffer Config: Not Supported 00:17:33.938 Get LBA Status Capability: Not Supported 00:17:33.938 Command & Feature Lockdown Capability: Not Supported 00:17:33.938 Abort Command Limit: 4 00:17:33.938 Async Event Request Limit: 4 00:17:33.938 Number of Firmware Slots: N/A 00:17:33.938 Firmware Slot 1 Read-Only: N/A 00:17:33.938 Firmware Activation Without Reset: N/A 00:17:33.938 Multiple Update Detection Support: N/A 00:17:33.938 Firmware Update Granularity: No Information Provided 00:17:33.938 Per-Namespace SMART Log: No 00:17:33.938 Asymmetric Namespace Access Log Page: Not Supported 00:17:33.938 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:33.938 Command Effects Log Page: Supported 00:17:33.938 Get Log Page Extended
Data: Supported 00:17:33.938 Telemetry Log Pages: Not Supported 00:17:33.938 Persistent Event Log Pages: Not Supported 00:17:33.938 Supported Log Pages Log Page: May Support 00:17:33.938 Commands Supported & Effects Log Page: Not Supported 00:17:33.938 Feature Identifiers & Effects Log Page:May Support 00:17:33.938 NVMe-MI Commands & Effects Log Page: May Support 00:17:33.938 Data Area 4 for Telemetry Log: Not Supported 00:17:33.938 Error Log Page Entries Supported: 128 00:17:33.938 Keep Alive: Supported 00:17:33.938 Keep Alive Granularity: 10000 ms 00:17:33.938 00:17:33.938 NVM Command Set Attributes 00:17:33.938 ========================== 00:17:33.938 Submission Queue Entry Size 00:17:33.938 Max: 64 00:17:33.938 Min: 64 00:17:33.938 Completion Queue Entry Size 00:17:33.938 Max: 16 00:17:33.938 Min: 16 00:17:33.938 Number of Namespaces: 32 00:17:33.938 Compare Command: Supported 00:17:33.938 Write Uncorrectable Command: Not Supported 00:17:33.938 Dataset Management Command: Supported 00:17:33.938 Write Zeroes Command: Supported 00:17:33.938 Set Features Save Field: Not Supported 00:17:33.939 Reservations: Supported 00:17:33.939 Timestamp: Not Supported 00:17:33.939 Copy: Supported 00:17:33.939 Volatile Write Cache: Present 00:17:33.939 Atomic Write Unit (Normal): 1 00:17:33.939 Atomic Write Unit (PFail): 1 00:17:33.939 Atomic Compare & Write Unit: 1 00:17:33.939 Fused Compare & Write: Supported 00:17:33.939 Scatter-Gather List 00:17:33.939 SGL Command Set: Supported 00:17:33.939 SGL Keyed: Supported 00:17:33.939 SGL Bit Bucket Descriptor: Not Supported 00:17:33.939 SGL Metadata Pointer: Not Supported 00:17:33.939 Oversized SGL: Not Supported 00:17:33.939 SGL Metadata Address: Not Supported 00:17:33.939 SGL Offset: Supported 00:17:33.939 Transport SGL Data Block: Not Supported 00:17:33.939 Replay Protected Memory Block: Not Supported 00:17:33.939 00:17:33.939 Firmware Slot Information 00:17:33.939 ========================= 00:17:33.939 Active slot: 1 00:17:33.939 Slot 1 Firmware Revision: 25.01 00:17:33.939 00:17:33.939 00:17:33.939 Commands Supported and Effects 00:17:33.939 ============================== 00:17:33.939 Admin Commands 00:17:33.939 -------------- 00:17:33.939 Get Log Page (02h): Supported 00:17:33.939 Identify (06h): Supported 00:17:33.939 Abort (08h): Supported 00:17:33.939 Set Features (09h): Supported 00:17:33.939 Get Features (0Ah): Supported 00:17:33.939 Asynchronous Event Request (0Ch): Supported 00:17:33.939 Keep Alive (18h): Supported 00:17:33.939 I/O Commands 00:17:33.939 ------------ 00:17:33.939 Flush (00h): Supported LBA-Change 00:17:33.939 Write (01h): Supported LBA-Change 00:17:33.939 Read (02h): Supported 00:17:33.939 Compare (05h): Supported 00:17:33.939 Write Zeroes (08h): Supported LBA-Change 00:17:33.939 Dataset Management (09h): Supported LBA-Change 00:17:33.939 Copy (19h): Supported LBA-Change 00:17:33.939 00:17:33.939 Error Log 00:17:33.939 ========= 00:17:33.939 00:17:33.939 Arbitration 00:17:33.939 =========== 00:17:33.939 Arbitration Burst: 1 00:17:33.939 00:17:33.939 Power Management 00:17:33.939 ================ 00:17:33.939 Number of Power States: 1 00:17:33.939 Current Power State: Power State #0 00:17:33.939 Power State #0: 00:17:33.939 Max Power: 0.00 W 00:17:33.939 Non-Operational State: Operational 00:17:33.939 Entry Latency: Not Reported 00:17:33.939 Exit Latency: Not Reported 00:17:33.939 Relative Read Throughput: 0 00:17:33.939 Relative Read Latency: 0 00:17:33.939 Relative Write Throughput: 0 00:17:33.939 Relative Write Latency: 0 
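The controller data above is the stdout of SPDK's identify example querying the target over NVMe/TCP. A minimal sketch of re-running the same query by hand, assuming the target from this run is still listening; the traddr below is illustrative, and the -L log flags are an assumption that requires a debug build (they are what produce the *DEBUG* nvme_tcp lines interleaved further down):

    # Query the subsystem directly with the identify example binary
    SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
    "$SPDK_EXAMPLE_DIR/identify" \
        -r 'trtype:TCP adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -L nvme -L nvme_tcp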
00:17:33.939 Idle Power: Not Reported
00:17:33.939 Active Power: Not Reported
00:17:33.939 Non-Operational Permissive Mode: Not Supported
00:17:33.939 
00:17:33.939 Health Information
00:17:33.939 ==================
00:17:33.939 Critical Warnings:
00:17:33.939 Available Spare Space: OK
00:17:33.939 Temperature: OK
00:17:33.939 Device Reliability: OK
00:17:33.939 Read Only: No
00:17:33.939 Volatile Memory Backup: OK
00:17:33.939 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:33.939 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:33.939 Available Spare: 0%
00:17:33.939 Available Spare Threshold: 0%
00:17:33.939 Life Percentage Used:[2024-12-06 17:17:16.212384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x15edd90)
00:17:33.939 [2024-12-06 17:17:16.212406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.939 [2024-12-06 17:17:16.212438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162f080, cid 7, qid 0
00:17:33.939 [2024-12-06 17:17:16.212519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:33.939 [2024-12-06 17:17:16.212528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:33.939 [2024-12-06 17:17:16.212532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212537] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162f080) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212579] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:17:33.939 [2024-12-06 17:17:16.212592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e600) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:33.939 [2024-12-06 17:17:16.212607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e780) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:33.939 [2024-12-06 17:17:16.212618] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162e900) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:33.939 [2024-12-06 17:17:16.212629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea80) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:33.939 [2024-12-06 17:17:16.212644] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15edd90)
00:17:33.939 [2024-12-06 17:17:16.212664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.939 [2024-12-06 17:17:16.212692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea80, cid 3, qid 0
00:17:33.939 [2024-12-06 17:17:16.212750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:33.939 [2024-12-06 17:17:16.212763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:33.939 [2024-12-06 17:17:16.212770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea80) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15edd90)
00:17:33.939 [2024-12-06 17:17:16.212817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.939 [2024-12-06 17:17:16.212863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea80, cid 3, qid 0
00:17:33.939 [2024-12-06 17:17:16.212935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:33.939 [2024-12-06 17:17:16.212950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:33.939 [2024-12-06 17:17:16.212958] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.212965] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea80) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.212975] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:17:33.939 [2024-12-06 17:17:16.212983] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:17:33.939 [2024-12-06 17:17:16.213005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.213015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.213021] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15edd90)
00:17:33.939 [2024-12-06 17:17:16.213035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.939 [2024-12-06 17:17:16.213074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea80, cid 3, qid 0
00:17:33.939 [2024-12-06 17:17:16.213139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:33.939 [2024-12-06 17:17:16.213160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:33.939 [2024-12-06 17:17:16.213169] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.213177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea80) on tqpair=0x15edd90
00:17:33.939 [2024-12-06 17:17:16.213205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.213215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:33.939 [2024-12-06 17:17:16.213222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15edd90)
00:17:33.939 [2024-12-06 17:17:16.213235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.939 [2024-12-06 17:17:16.213293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea80, cid 3, qid 0
[... the identical FABRIC PROPERTY GET / response exchange repeats unchanged while the host polls the controller (entries 17:17:16.213345 through 17:17:16.216233 elided) ...]
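Each elided exchange is one Fabrics Property Get of the controller's CSTS register: after the PROPERTY SET above writes CC.SHN, nvme_ctrlr_shutdown_poll_async keeps re-reading CSTS until SHST reports shutdown complete or the 10000 ms budget logged above expires. The same bounded-poll shape in plain shell, purely as an illustration (not SPDK code; the log path is hypothetical):

    # Poll for a completion marker with a 10 s deadline, mirroring the
    # shutdown-poll loop traced in this log
    deadline=$((SECONDS + 10))
    until grep -q 'shutdown complete' /tmp/ctrlr.log 2>/dev/null; do
        (( SECONDS >= deadline )) && { echo 'shutdown timed out' >&2; break; }
        sleep 0.05
    done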
00:17:33.940 [2024-12-06 17:17:16.220285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:33.940 [2024-12-06 17:17:16.220302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:33.940 [2024-12-06 17:17:16.220307] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:33.940 [2024-12-06 17:17:16.220312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea80) on tqpair=0x15edd90
00:17:33.940 [2024-12-06 17:17:16.220328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:33.940 [2024-12-06 17:17:16.220334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:33.940 [2024-12-06 17:17:16.220338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x15edd90)
00:17:33.940 [2024-12-06 17:17:16.220347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:33.940 [2024-12-06 17:17:16.220376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x162ea80, cid 3, qid 0
00:17:33.940 [2024-12-06 17:17:16.220442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:33.940 [2024-12-06 17:17:16.220450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:33.940 [2024-12-06 17:17:16.220454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:33.940 [2024-12-06 17:17:16.220458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x162ea80) on tqpair=0x15edd90
00:17:33.940 [2024-12-06 17:17:16.220467] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds
00:17:34.199 0%
00:17:34.199 Data Units Read: 0
00:17:34.199 Data Units Written: 0
00:17:34.199 Host Read Commands: 0
00:17:34.199 Host Write Commands: 0
00:17:34.200 Controller Busy Time: 0 minutes
00:17:34.200 Power Cycles: 0
00:17:34.200 Power On Hours: 0 hours
00:17:34.200 Unsafe Shutdowns: 0
00:17:34.200 Unrecoverable Media Errors: 0
00:17:34.200 Lifetime Error Log Entries: 0
00:17:34.200 Warning Temperature Time: 0 minutes
00:17:34.200 Critical Temperature Time: 0 minutes
00:17:34.200 
00:17:34.200 Number of Queues
00:17:34.200 ================
00:17:34.200 Number of I/O Submission Queues: 127
00:17:34.200 Number of I/O Completion Queues: 127
00:17:34.200 
00:17:34.200 Active Namespaces
00:17:34.200 =================
00:17:34.200 Namespace ID:1
00:17:34.200 Error Recovery Timeout: Unlimited
00:17:34.200 Command Set Identifier: NVM (00h)
00:17:34.200 Deallocate: Supported
00:17:34.200 Deallocated/Unwritten Error: Not Supported
00:17:34.200 Deallocated Read Value: Unknown
00:17:34.200 Deallocate in Write Zeroes: Not Supported
00:17:34.200 Deallocated Guard Field: 0xFFFF
00:17:34.200 Flush: Supported
00:17:34.200 Reservation: Supported
00:17:34.200 Namespace Sharing Capabilities: Multiple Controllers
00:17:34.200 Size (in LBAs): 131072 (0GiB)
00:17:34.200 Capacity (in LBAs): 131072 (0GiB)
00:17:34.200 Utilization (in LBAs): 131072 (0GiB)
00:17:34.200 NGUID: ABCDEF0123456789ABCDEF0123456789
00:17:34.200 EUI64: ABCDEF0123456789
00:17:34.200 UUID: 2767bade-5bf0-4d65-a344-c24196d24b73
00:17:34.200 Thin Provisioning: Not Supported
00:17:34.200 Per-NS Atomic Units: Yes
00:17:34.200 Atomic Boundary Size (Normal): 0
00:17:34.200 Atomic Boundary Size (PFail): 0
00:17:34.200 Atomic Boundary Offset: 0
00:17:34.200 Maximum Single Source Range Length: 65535
00:17:34.200 Maximum Copy Length: 65535
00:17:34.200 Maximum Source Range Count: 1
00:17:34.200 NGUID/EUI64 Never Reused: No
00:17:34.200 Namespace Write Protected: No
00:17:34.200 Number of LBA Formats: 1
00:17:34.200 Current LBA Format: LBA Format #00
00:17:34.200 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:34.200 
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
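rpc_cmd above is the autotest wrapper around SPDK's JSON-RPC client; the standalone equivalent of that teardown call against the target's default RPC socket would be:

    # Delete the subsystem by hand over the default RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1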
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:34.200 rmmod nvme_tcp
00:17:34.200 rmmod nvme_fabrics
00:17:34.200 rmmod nvme_keyring
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 86941 ']'
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 86941
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 86941 ']'
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 86941
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86941
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:34.200 killing process with pid 86941 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86941'
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 86941
00:17:34.200 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 86941
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
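The iptr helper traced at nvmf/common.sh@791 drops every SPDK-tagged firewall rule in one pass by round-tripping the ruleset; run as root, it boils down to the three commands shown in the trace:

    # Strip SPDK_NVMF-tagged rules and reload the rest (needs root)
    iptables-save | grep -v SPDK_NVMF | iptables-restore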
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:34.459 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:34.717 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0
00:17:34.718 ************************************
00:17:34.718 END TEST nvmf_identify
00:17:34.718 ************************************
00:17:34.718 
00:17:34.718 real 0m2.295s
00:17:34.718 user 0m4.737s
00:17:34.718 sys 0m0.701s
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:17:34.718 ************************************
00:17:34.718 START TEST nvmf_perf
00:17:34.718 ************************************
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp
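The START/END TEST banners and the real/user/sys summary above come from the harness's run_test wrapper, which names, times, and brackets each suite. A simplified sketch of that pattern (the real helper in test/common/autotest_common.sh does more bookkeeping):

    # Simplified run_test: banner, time the suite, banner again
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }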
00:17:34.718 * Looking for test storage...
00:17:34.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:17:34.718 17:17:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:34.718 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:34.718 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:34.718 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:34.718 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:34.718 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
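The cmp_versions trace above splits both version strings on ./-/: and compares them component by component to decide whether the installed lcov predates 2.0. Where GNU coreutils is available, the same lt check can be written far more compactly with version sort; a drop-in sketch, not the repo's implementation:

    # Version-aware less-than using sort -V
    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo 'lcov 1.15 is older than 2'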
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:34.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.976 --rc genhtml_branch_coverage=1
00:17:34.976 --rc genhtml_function_coverage=1
00:17:34.976 --rc genhtml_legend=1
00:17:34.976 --rc geninfo_all_blocks=1
00:17:34.976 --rc geninfo_unexecuted_blocks=1
00:17:34.976 
00:17:34.976 '
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:34.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.976 --rc genhtml_branch_coverage=1
00:17:34.976 --rc genhtml_function_coverage=1
00:17:34.976 --rc genhtml_legend=1
00:17:34.976 --rc geninfo_all_blocks=1
00:17:34.976 --rc geninfo_unexecuted_blocks=1
00:17:34.976 
00:17:34.976 '
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:34.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.976 --rc genhtml_branch_coverage=1
00:17:34.976 --rc genhtml_function_coverage=1
00:17:34.976 --rc genhtml_legend=1
00:17:34.976 --rc geninfo_all_blocks=1
00:17:34.976 --rc geninfo_unexecuted_blocks=1
00:17:34.976 
00:17:34.976 '
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:34.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:34.976 --rc genhtml_branch_coverage=1
00:17:34.976 --rc genhtml_function_coverage=1
00:17:34.976 --rc genhtml_legend=1
00:17:34.976 --rc geninfo_all_blocks=1
00:17:34.976 --rc geninfo_unexecuted_blocks=1
00:17:34.976 
00:17:34.976 '
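The rc switches accumulated in LCOV_OPTS are later handed to lcov/genhtml so that branch and function data get collected and rendered. A typical consumption of the two lcov_* flags, as an illustration only (the harness's real coverage step lives elsewhere):

    # Capture coverage with branch/function tracking enabled
    lcov --capture --directory . \
        --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
        --output-file coverage.info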
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:17:34.976 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
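NVME_HOSTNQN is minted fresh for each run with nvme gen-hostnqn and threaded into every nvme connect via the NVME_HOST array. A sketch of the same pattern outside the harness; the traddr is an assumption built from the NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR defaults above:

    # Connect a kernel NVMe/TCP initiator under a generated host identity
    hostnqn=$(nvme gen-hostnqn)
    nvme connect -t tcp -a 192.168.100.8 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn "$hostnqn"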
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.977 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:34.977 Cannot find device "nvmf_init_br" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:34.977 Cannot find device "nvmf_init_br2" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:34.977 Cannot find device "nvmf_tgt_br" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:34.977 Cannot find device "nvmf_tgt_br2" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:34.977 Cannot find device "nvmf_init_br" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:34.977 Cannot find device "nvmf_init_br2" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:34.977 Cannot find device "nvmf_tgt_br" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:34.977 Cannot find device "nvmf_tgt_br2" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:34.977 Cannot find device "nvmf_br" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:34.977 Cannot find device "nvmf_init_if" 00:17:34.977 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:34.978 Cannot find device "nvmf_init_if2" 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:34.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:34.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:34.978 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:35.237 17:17:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:35.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:35.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:17:35.237 00:17:35.237 --- 10.0.0.3 ping statistics --- 00:17:35.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.237 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:35.237 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:35.237 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.062 ms 00:17:35.237 00:17:35.237 --- 10.0.0.4 ping statistics --- 00:17:35.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.237 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:35.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:35.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:17:35.237 00:17:35.237 --- 10.0.0.1 ping statistics --- 00:17:35.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.237 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:35.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:17:35.237 00:17:35.237 --- 10.0.0.2 ping statistics --- 00:17:35.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.237 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=87202 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 87202 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 87202 ']' 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
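The nvmf_veth_init sequence traced above (nvmf/common.sh@145-@227) builds the whole test network from scratch: veth pairs for the initiator and target sides, a network namespace (nvmf_tgt_ns_spdk) holding the target ends, a bridge (nvmf_br) joining the host-side peers, tagged iptables ACCEPT rules for port 4420, and ping checks in both directions before nvmf_tgt is started inside the namespace. Reduced to a single initiator/target pair, the topology boils down to the following sketch (plain iproute2/iptables; names and addresses are taken from the trace, but this is a simplified reconstruction, not the full common.sh logic, which also creates the *_if2 and *_br2 interfaces):

  ip netns add nvmf_tgt_ns_spdk                              # namespace that will run nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge the two host-side peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                         # tagged so teardown can strip it later
  ping -c 1 10.0.0.3                                         # initiator -> target sanity check

The SPDK_NVMF comment is what makes teardown cheap: nvmf_tcp_fini later runs iptables-save | grep -v SPDK_NVMF | iptables-restore, dropping every rule the test added in one pass, as visible in the cleanup trace further down.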
00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.237 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:35.497 [2024-12-06 17:17:17.555403] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:17:35.497 [2024-12-06 17:17:17.555499] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.497 [2024-12-06 17:17:17.708460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:35.497 [2024-12-06 17:17:17.747558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.497 [2024-12-06 17:17:17.747629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.497 [2024-12-06 17:17:17.747644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.497 [2024-12-06 17:17:17.747655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.497 [2024-12-06 17:17:17.747663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.497 [2024-12-06 17:17:17.748594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.497 [2024-12-06 17:17:17.748700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.497 [2024-12-06 17:17:17.748761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:35.497 [2024-12-06 17:17:17.748764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:35.755 17:17:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:36.323 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:36.323 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:36.581 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:36.581 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:36.840 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:36.840 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:17:36.840 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
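With the malloc bdev created and the local NVMe drive (0000:00:10.0) pulled in via gen_nvme.sh, perf.sh configures the target over JSON-RPC; the entries that follow show each call verbatim. Condensed, the sequence is (same rpc.py commands and arguments as in the trace below; rpc.py talks to the target over /var/tmp/spdk.sock, so it works even though nvmf_tgt runs inside the network namespace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                 # enable the TCP transport
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                         # -a: allow any host NQN to connect
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # namespace 1: RAM-backed bdev
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # namespace 2: the local NVMe disk
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420   # discovery service, same port

After this, any NVMe/TCP initiator on the bridge can discover and connect to cnode1 at 10.0.0.3:4420, which is exactly what the spdk_nvme_perf runs below do with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'.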
00:17:36.840 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:17:36.840 17:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:17:37.099 [2024-12-06 17:17:19.229512] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:37.099 17:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:37.357 17:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:17:37.357 17:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:17:37.615 17:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:17:37.615 17:17:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:17:37.873 17:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:17:38.131 [2024-12-06 17:17:20.338929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:17:38.131 17:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:17:38.389 17:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']'
00:17:38.389 17:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:17:38.389 17:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:17:38.389 17:17:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0'
00:17:39.763 Initializing NVMe Controllers
00:17:39.763 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:17:39.763 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:17:39.763 Initialization complete. Launching workers.
00:17:39.763 ========================================================
00:17:39.763 Latency(us)
00:17:39.763 Device Information : IOPS MiB/s Average min max
00:17:39.763 PCIE (0000:00:10.0) NSID 1 from core 0: 23631.14 92.31 1353.80 322.73 8698.13
00:17:39.763 ========================================================
00:17:39.763 Total : 23631.14 92.31 1353.80 322.73 8698.13
00:17:39.763
00:17:39.763 17:17:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:41.139 Initializing NVMe Controllers
00:17:41.139 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:41.139 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:41.139 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:41.139 Initialization complete. Launching workers.
00:17:41.139 ========================================================
00:17:41.139 Latency(us)
00:17:41.139 Device Information : IOPS MiB/s Average min max
00:17:41.139 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3416.92 13.35 292.30 118.01 6164.38
00:17:41.139 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8128.44 5982.43 12033.64
00:17:41.139 ========================================================
00:17:41.139 Total : 3540.92 13.83 566.70 118.01 12033.64
00:17:41.139
00:17:41.139 17:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:42.598 Initializing NVMe Controllers
00:17:42.598 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:42.598 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:42.598 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:42.598 Initialization complete. Launching workers.
00:17:42.598 ========================================================
00:17:42.598 Latency(us)
00:17:42.598 Device Information : IOPS MiB/s Average min max
00:17:42.598 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8210.75 32.07 3916.60 549.03 41993.02
00:17:42.598 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2692.28 10.52 12003.52 6745.70 20204.75
00:17:42.598 ========================================================
00:17:42.598 Total : 10903.02 42.59 5913.50 549.03 41993.02
00:17:42.598
00:17:42.598 17:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]]
17:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:17:45.127 Initializing NVMe Controllers
00:17:45.127 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:45.127 Controller IO queue size 128, less than required.
00:17:45.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:45.127 Controller IO queue size 128, less than required.
00:17:45.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:45.127 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:45.127 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:45.127 Initialization complete. Launching workers.
00:17:45.127 ========================================================
00:17:45.127 Latency(us)
00:17:45.127 Device Information : IOPS MiB/s Average min max
00:17:45.127 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1600.45 400.11 81125.55 54231.96 190388.14
00:17:45.127 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.62 143.91 232864.48 120063.83 350412.90
00:17:45.127 ========================================================
00:17:45.127 Total : 2176.08 544.02 121264.07 54231.96 350412.90
00:17:45.127
00:17:45.127 17:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4
00:17:45.127 Initializing NVMe Controllers
00:17:45.127 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:45.127 Controller IO queue size 128, less than required.
00:17:45.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:45.127 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:17:45.127 Controller IO queue size 128, less than required.
00:17:45.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:45.127 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test
00:17:45.127 WARNING: Some requested NVMe devices were skipped
00:17:45.127 No valid NVMe controllers or AIO or URING devices found
00:17:45.127 17:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat
00:17:47.657 Initializing NVMe Controllers
00:17:47.657 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:47.657 Controller IO queue size 128, less than required.
00:17:47.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:47.657 Controller IO queue size 128, less than required.
00:17:47.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:47.657 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:47.657 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:47.657 Initialization complete. Launching workers.
00:17:47.657
00:17:47.657 ====================
00:17:47.657 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:47.657 TCP transport:
00:17:47.657 polls: 10526
00:17:47.657 idle_polls: 7046
00:17:47.657 sock_completions: 3480
00:17:47.657 nvme_completions: 5945
00:17:47.657 submitted_requests: 8854
00:17:47.657 queued_requests: 1
00:17:47.657
00:17:47.657 ====================
00:17:47.657 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:47.657 TCP transport:
00:17:47.657 polls: 10847
00:17:47.657 idle_polls: 7605
00:17:47.657 sock_completions: 3242
00:17:47.657 nvme_completions: 6397
00:17:47.657 submitted_requests: 9608
00:17:47.657 queued_requests: 1
00:17:47.657 ========================================================
00:17:47.657 Latency(us)
00:17:47.657 Device Information : IOPS MiB/s Average min max
00:17:47.657 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1485.98 371.49 87828.81 60302.76 145453.00
00:17:47.657 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1598.97 399.74 81005.81 33442.24 128670.34
00:17:47.657 ========================================================
00:17:47.657 Total : 3084.95 771.24 84292.35 33442.24 145453.00
00:17:47.657
00:17:47.657 17:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:17:47.915 17:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:48.172 rmmod nvme_tcp
00:17:48.172 rmmod nvme_fabrics
00:17:48.172 rmmod nvme_keyring
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 87202 ']'
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 87202
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 87202 ']'
00:17:48.172 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 87202
00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87202
00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf --
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.173 killing process with pid 87202 00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87202' 00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 87202 00:17:48.173 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 87202 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:48.738 17:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:48.738 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:48.738 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:48.738 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:48.738 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:17:48.997 00:17:48.997 real 0m14.344s 00:17:48.997 user 0m52.043s 00:17:48.997 sys 0m3.584s 00:17:48.997 17:17:31 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:48.997 ************************************ 00:17:48.997 END TEST nvmf_perf 00:17:48.997 ************************************ 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:48.997 ************************************ 00:17:48.997 START TEST nvmf_fio_host 00:17:48.997 ************************************ 00:17:48.997 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:49.255 * Looking for test storage... 00:17:49.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:49.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.255 --rc genhtml_branch_coverage=1 00:17:49.255 --rc genhtml_function_coverage=1 00:17:49.255 --rc genhtml_legend=1 00:17:49.255 --rc geninfo_all_blocks=1 00:17:49.255 --rc geninfo_unexecuted_blocks=1 00:17:49.255 00:17:49.255 ' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:49.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.255 --rc genhtml_branch_coverage=1 00:17:49.255 --rc genhtml_function_coverage=1 00:17:49.255 --rc genhtml_legend=1 00:17:49.255 --rc geninfo_all_blocks=1 00:17:49.255 --rc geninfo_unexecuted_blocks=1 00:17:49.255 00:17:49.255 ' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:49.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.255 --rc genhtml_branch_coverage=1 00:17:49.255 --rc genhtml_function_coverage=1 00:17:49.255 --rc genhtml_legend=1 00:17:49.255 --rc geninfo_all_blocks=1 00:17:49.255 --rc geninfo_unexecuted_blocks=1 00:17:49.255 00:17:49.255 ' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:49.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.255 --rc genhtml_branch_coverage=1 00:17:49.255 --rc genhtml_function_coverage=1 00:17:49.255 --rc genhtml_legend=1 00:17:49.255 --rc geninfo_all_blocks=1 00:17:49.255 --rc geninfo_unexecuted_blocks=1 00:17:49.255 00:17:49.255 ' 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.255 17:17:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.255 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.256 17:17:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:49.256 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
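One wart repeats in both suites' setup: when nvmf/common.sh is sourced, line 33 evaluates '[' '' -eq 1 ']' and bash prints "[: : integer expression expected", because -eq requires integer operands and the variable behind it is empty here. The test's exit status is 2, which the surrounding conditional treats as false, so the run continues and the message is cosmetic. A minimal reproduction and the usual guard (the variable name is illustrative, not the one common.sh uses):

  flag=""
  [ "$flag" -eq 1 ] && echo enabled        # error: "[: : integer expression expected", exit status 2
  [ "${flag:-0}" -eq 1 ] && echo enabled   # default empty/unset to 0 so the comparison stays numeric

Defaulting the operand with ${var:-0} keeps the comparison valid and the log clean without changing the branch taken.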
00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:49.256 Cannot find device "nvmf_init_br" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:49.256 Cannot find device "nvmf_init_br2" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:49.256 Cannot find device "nvmf_tgt_br" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:49.256 Cannot find device "nvmf_tgt_br2" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:49.256 Cannot find device "nvmf_init_br" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:49.256 Cannot find device "nvmf_init_br2" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:49.256 Cannot find device "nvmf_tgt_br" 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:17:49.256 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:49.513 Cannot find device "nvmf_tgt_br2" 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:49.513 Cannot find device "nvmf_br" 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:49.513 Cannot find device "nvmf_init_if" 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:49.513 Cannot find device "nvmf_init_if2" 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:49.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:49.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:49.513 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:49.514 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:49.771 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:49.771 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms
00:17:49.771
00:17:49.771 --- 10.0.0.3 ping statistics ---
00:17:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:49.771 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:49.771 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:49.771 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms
00:17:49.771
00:17:49.771 --- 10.0.0.4 ping statistics ---
00:17:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:49.771 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:49.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:49.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:17:49.771
00:17:49.771 --- 10.0.0.1 ping statistics ---
00:17:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:49.771 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:49.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:49.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms
00:17:49.771
00:17:49.771 --- 10.0.0.2 ping statistics ---
00:17:49.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:49.771 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:49.771 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87725
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87725
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 87725 ']'
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:49.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:49.772 17:17:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:17:49.772 [2024-12-06 17:17:31.953849] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:17:49.772 [2024-12-06 17:17:31.953957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:50.029 [2024-12-06 17:17:32.107339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:50.029 [2024-12-06 17:17:32.147101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:50.029 [2024-12-06 17:17:32.147367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:50.029 [2024-12-06 17:17:32.147596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:50.029 [2024-12-06 17:17:32.147738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:50.029 [2024-12-06 17:17:32.147880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
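Condensed, the nvmf_veth_init bring-up traced above plus the rpc.py provisioning that follows come down to the sketch below. It is assembled only from commands that appear verbatim in this log; it shows one of the two initiator/target veth pairs and omits the retry and error handling of the real nvmf/common.sh helpers.

    # Topology: initiator veth in the root namespace, target veth inside
    # nvmf_tgt_ns_spdk, both bridge-side peers enslaved to nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port on the initiator-side interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Target bring-up and provisioning, matching the trace below: start
    # nvmf_tgt inside the namespace, then configure it over rpc.py.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The fio jobs further down then reach this listener through the spdk_nvme ioengine, passing the connection as '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'.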
00:17:50.029 [2024-12-06 17:17:32.148844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.029 [2024-12-06 17:17:32.148984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.029 [2024-12-06 17:17:32.149059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.029 [2024-12-06 17:17:32.149057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.029 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.029 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:50.029 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:50.287 [2024-12-06 17:17:32.535730] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.287 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:50.287 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.287 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.546 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:50.804 Malloc1 00:17:50.804 17:17:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.063 17:17:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:51.322 17:17:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:51.581 [2024-12-06 17:17:33.826314] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:51.581 17:17:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:17:52.164 17:17:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096
00:17:52.164 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:17:52.164 fio-3.35
00:17:52.164 Starting 1 thread
00:17:54.697
00:17:54.697 test: (groupid=0, jobs=1): err= 0: pid=87844: Fri Dec 6 17:17:36 2024
00:17:54.697 read: IOPS=8866, BW=34.6MiB/s (36.3MB/s)(69.5MiB/2007msec)
00:17:54.697 slat (usec): min=2, max=366, avg= 2.70, stdev= 3.75
00:17:54.697 clat (usec): min=4217, max=13187, avg=7546.95, stdev=606.04
00:17:54.697 lat (usec): min=4273, max=13189, avg=7549.66, stdev=606.09
00:17:54.697 clat percentiles (usec):
00:17:54.697 | 1.00th=[ 6194], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7111],
00:17:54.697 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7504], 60.00th=[ 7635],
00:17:54.697 | 70.00th=[ 7767], 80.00th=[ 7898], 90.00th=[ 8160], 95.00th=[ 8356],
00:17:54.697 | 99.00th=[ 9110], 99.50th=[10421], 99.90th=[12256], 99.95th=[12911],
00:17:54.697 | 99.99th=[13042]
00:17:54.697 bw ( KiB/s): min=34744, max=36008, per=99.99%, avg=35466.00, stdev=528.10, samples=4
00:17:54.697 iops : min= 8686, max= 9002, avg=8866.50, stdev=132.02, samples=4
00:17:54.697 write: IOPS=8881, BW=34.7MiB/s (36.4MB/s)(69.6MiB/2007msec); 0 zone resets
00:17:54.697 slat (usec): min=2, max=361, avg= 2.76, stdev= 2.94
00:17:54.697 clat (usec): min=2961, max=13210, avg=6818.18, stdev=548.61
00:17:54.697 lat (usec): min=2978, max=13212, avg=6820.94, stdev=548.58
00:17:54.697 clat percentiles (usec):
00:17:54.697 | 1.00th=[ 5014], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521],
00:17:54.697 | 30.00th=[ 6652], 40.00th=[ 6718], 50.00th=[ 6849], 60.00th=[ 6915],
00:17:54.697 | 70.00th=[ 7046], 80.00th=[ 7177], 90.00th=[ 7373], 95.00th=[ 7504],
00:17:54.697 | 99.00th=[ 7963], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[12387],
00:17:54.697 | 99.99th=[13173]
00:17:54.697 bw ( KiB/s): min=35408, max=35648, per=99.99%, avg=35522.00, stdev=98.28, samples=4
00:17:54.697 iops : min= 8852, max= 8912, avg=8880.50, stdev=24.57, samples=4
00:17:54.697 lat (msec) : 4=0.11%, 10=99.47%, 20=0.42%
00:17:54.697 cpu : usr=68.59%, sys=22.58%, ctx=29, majf=0, minf=7
00:17:54.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:17:54.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:54.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:54.697 issued rwts: total=17796,17825,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:54.697 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:54.697
00:17:54.697 Run status group 0 (all jobs):
00:17:54.697 READ: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.5MiB (72.9MB), run=2007-2007msec
00:17:54.698 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.6MiB (73.0MB), run=2007-2007msec
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:17:54.698 17:17:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1'
00:17:54.698 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:17:54.698 fio-3.35
00:17:54.698 Starting 1 thread
00:17:57.232
00:17:57.232 test: (groupid=0, jobs=1): err= 0: pid=87887: Fri Dec 6 17:17:39 2024
00:17:57.232 read: IOPS=8007, BW=125MiB/s (131MB/s)(251MiB/2005msec)
00:17:57.232 slat (usec): min=3, max=130, avg= 3.92, stdev= 1.78
00:17:57.232 clat (usec): min=2728, max=17389, avg=9307.41, stdev=2219.52
00:17:57.232 lat (usec): min=2732, max=17393, avg=9311.32, stdev=2219.55
00:17:57.232 clat percentiles (usec):
00:17:57.232 | 1.00th=[ 4948], 5.00th=[ 5866], 10.00th=[ 6390], 20.00th=[ 7242],
00:17:57.232 | 30.00th=[ 7963], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10028],
00:17:57.232 | 70.00th=[10683], 80.00th=[11338], 90.00th=[11863], 95.00th=[12780],
00:17:57.232 | 99.00th=[15008], 99.50th=[15926], 99.90th=[17171], 99.95th=[17433],
00:17:57.232 | 99.99th=[17433]
00:17:57.232 bw ( KiB/s): min=58624, max=74144, per=51.93%, avg=66540.50, stdev=7244.80, samples=4
00:17:57.232 iops : min= 3664, max= 4634, avg=4158.75, stdev=452.77, samples=4
00:17:57.232 write: IOPS=4838, BW=75.6MiB/s (79.3MB/s)(136MiB/1804msec); 0 zone resets
00:17:57.232 slat (usec): min=37, max=351, avg=39.99, stdev= 7.12
00:17:57.232 clat (usec): min=4047, max=18221, avg=11534.15, stdev=2011.75
00:17:57.232 lat (usec): min=4085, max=18260, avg=11574.14, stdev=2012.17
00:17:57.232 clat percentiles (usec):
00:17:57.232 | 1.00th=[ 7373], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765],
00:17:57.232 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11994],
00:17:57.232 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14222], 95.00th=[15008],
00:17:57.232 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17695], 99.95th=[17957],
00:17:57.232 | 99.99th=[18220]
00:17:57.232 bw ( KiB/s): min=62688, max=76896, per=89.39%, avg=69195.25, stdev=6986.53, samples=4
00:17:57.232 iops : min= 3918, max= 4806, avg=4324.50, stdev=436.50, samples=4
00:17:57.232 lat (msec) : 4=0.13%, 10=47.20%, 20=52.67%
00:17:57.232 cpu : usr=76.65%, sys=14.97%, ctx=5, majf=0, minf=18
00:17:57.232 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7%
00:17:57.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:57.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:17:57.232 issued rwts: total=16056,8728,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:57.232 latency : target=0, window=0, percentile=100.00%, depth=128
00:17:57.232
00:17:57.232 Run status group 0 (all jobs):
00:17:57.232 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2005-2005msec
00:17:57.232 WRITE: bw=75.6MiB/s (79.3MB/s), 75.6MiB/s-75.6MiB/s (79.3MB/s-79.3MB/s), io=136MiB (143MB), run=1804-1804msec
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:57.232 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:57.492 rmmod nvme_tcp
00:17:57.492 rmmod nvme_fabrics
00:17:57.492 rmmod nvme_keyring
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 87725 ']'
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 87725
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87725 ']'
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87725
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87725
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:57.492 killing process with pid 87725
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87725'
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87725
00:17:57.492 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87725
00:17:57.493 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:57.493 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:57.493 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save
00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.752 17:17:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.752 17:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:17:57.752 00:17:57.752 real 0m8.773s 00:17:57.752 user 0m35.257s 00:17:57.752 sys 0m2.210s 00:17:57.752 17:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:57.752 17:17:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.752 ************************************ 00:17:57.752 END TEST nvmf_fio_host 00:17:57.752 ************************************ 00:17:58.011 17:17:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:58.011 17:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.011 17:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.011 17:17:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.011 ************************************ 00:17:58.011 START TEST nvmf_failover 00:17:58.011 ************************************ 00:17:58.011 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:17:58.012 * Looking for test storage... 00:17:58.012 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.012 --rc genhtml_branch_coverage=1 00:17:58.012 --rc genhtml_function_coverage=1 00:17:58.012 --rc genhtml_legend=1 00:17:58.012 --rc geninfo_all_blocks=1 00:17:58.012 --rc geninfo_unexecuted_blocks=1 00:17:58.012 00:17:58.012 ' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.012 --rc genhtml_branch_coverage=1 00:17:58.012 --rc genhtml_function_coverage=1 00:17:58.012 --rc genhtml_legend=1 00:17:58.012 --rc geninfo_all_blocks=1 00:17:58.012 --rc geninfo_unexecuted_blocks=1 00:17:58.012 00:17:58.012 ' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.012 --rc genhtml_branch_coverage=1 00:17:58.012 --rc genhtml_function_coverage=1 00:17:58.012 --rc genhtml_legend=1 00:17:58.012 --rc geninfo_all_blocks=1 00:17:58.012 --rc geninfo_unexecuted_blocks=1 00:17:58.012 00:17:58.012 ' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:58.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.012 --rc genhtml_branch_coverage=1 00:17:58.012 --rc genhtml_function_coverage=1 00:17:58.012 --rc genhtml_legend=1 00:17:58.012 --rc geninfo_all_blocks=1 00:17:58.012 --rc geninfo_unexecuted_blocks=1 00:17:58.012 00:17:58.012 ' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.012 
17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:58.012 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
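The NVME_HOSTNQN, NVME_HOSTID and NVME_CONNECT values sourced above give host-side tests a stable initiator identity. As an illustration only (failover.sh itself drives I/O through bdevperf over /var/tmp/bdevperf.sock rather than through the kernel initiator), connecting a kernel host with these values to the kind of listener this log sets up would look like:

    # illustrative nvme-cli connect; the subsystem NQN and 10.0.0.3:4420
    # come from the fio_host test earlier in this log, not from output
    # printed at this point in the trace
    nvme connect -t tcp -a 10.0.0.3 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56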
00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:58.013 Cannot find device "nvmf_init_br" 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:17:58.013 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:58.272 Cannot find device "nvmf_init_br2" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:17:58.272 Cannot find device "nvmf_tgt_br" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.272 Cannot find device "nvmf_tgt_br2" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:58.272 Cannot find device "nvmf_init_br" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:58.272 Cannot find device "nvmf_init_br2" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:58.272 Cannot find device "nvmf_tgt_br" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:58.272 Cannot find device "nvmf_tgt_br2" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:58.272 Cannot find device "nvmf_br" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:58.272 Cannot find device "nvmf_init_if" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:58.272 Cannot find device "nvmf_init_if2" 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.272 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.272 
17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.272 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:58.531 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:58.531 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms
00:17:58.531
00:17:58.531 --- 10.0.0.3 ping statistics ---
00:17:58.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:58.531 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:58.531 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:58.531 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms
00:17:58.531
00:17:58.531 --- 10.0.0.4 ping statistics ---
00:17:58.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:58.531 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:58.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:58.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms
00:17:58.531
00:17:58.531 --- 10.0.0.1 ping statistics ---
00:17:58.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:58.531 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:58.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:58.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms
00:17:58.531
00:17:58.531 --- 10.0.0.2 ping statistics ---
00:17:58.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:58.531 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=88169
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 88169
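Note how nvmf/common.sh@227 splices NVMF_TARGET_NS_CMD in front of NVMF_APP: because the harness always launches "${NVMF_APP[@]}", prepending the netns-exec words is enough to move the whole target into the namespace, as the @508 expansion confirms. In sketch form (array contents assumed from the expansions visible above):

  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  # Prefixing the argv array re-homes every later launch into the namespace:
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -i 0 -e 0xFFFF -m 0xE &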
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88169 ']'
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:58.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:58.531 17:17:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:17:58.531 [2024-12-06 17:17:40.727045] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
[2024-12-06 17:17:40.727121] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-12-06 17:17:40.871741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-12-06 17:17:40.905782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-12-06 17:17:40.905844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-12-06 17:17:40.905855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-12-06 17:17:40.905863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-12-06 17:17:40.905870] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
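waitforlisten blocks until the freshly forked target answers on its RPC socket, bailing out early if the pid dies during startup. A reduced sketch of the idea (the real helper in common/autotest_common.sh does more bookkeeping; the function name here is hypothetical):

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do                      # max_retries=100, as traced
          kill -0 "$pid" 2>/dev/null || return 1           # app died while starting
          [[ -S $rpc_addr ]] &&
              /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                  rpc_get_methods &>/dev/null && return 0  # RPC server is answering
          sleep 0.5
      done
      return 1
  }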
00:17:58.790 [2024-12-06 17:17:40.906626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-12-06 17:17:40.906747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-12-06 17:17:40.906752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:17:59.049 [2024-12-06 17:17:41.327879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:59.308 17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:17:59.566 Malloc0
00:17:59.566 17:17:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:17:59.824 17:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:00.083 17:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:00.340 [2024-12-06 17:17:42.512660] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:00.340 17:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:00.597 [2024-12-06 17:17:42.776830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:00.598 17:17:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:00.856 [2024-12-06 17:17:43.037083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=88271
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 88271 /var/tmp/bdevperf.sock
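Everything the target needs for the failover scenario is provisioned by the RPCs above: a TCP transport, one 64 MiB malloc bdev exported as a namespace of cnode1, and three listeners the test will add and remove to force path switches. Collected in one place (rpc.py talks to /var/tmp/spdk.sock, i.e. the target, by default; flags exactly as traced):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s "$port"
  done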
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88271 ']'
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:00.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:00.856 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:01.114 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:01.114 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:01.114 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:01.677 NVMe0n1
00:18:01.677 17:17:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:01.933
00:18:01.933 17:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=88301
00:18:01.933 17:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:01.933 17:17:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:18:02.927 17:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:03.185 [2024-12-06 17:17:45.362485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2523930 is same with the state(6) to be set
00:18:03.186 [... the same tqpair=0x2523930 message repeats dozens of times while the 4420 listener is torn down; duplicates omitted ...]
00:18:03.186 17:17:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:18:06.461 17:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:06.719
00:18:06.977 17:17:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:06.977 [2024-12-06 17:17:49.109476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2524290 is same with the state(6) to be set
00:18:06.978 [... the same tqpair=0x2524290 message repeats dozens of times while the 4421 listener is torn down; duplicates omitted ...]
00:18:06.978 17:17:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:18:10.261 17:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:10.261 [2024-12-06 17:17:52.457394] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:18:10.261 17:17:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:18:11.196 17:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:11.765 [2024-12-06 17:17:53.774737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x252ee50 is same with the state(6) to be set
00:18:11.766 [... the same tqpair=0x252ee50 message repeats dozens of times while the 4422 listener is torn down; duplicates omitted ...]
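That completes the path dance the test drives while bdevperf keeps I/O running: with -x failover, the additional bdev_nvme_attach_controller calls register 4421 and 4422 as alternate paths for NVMe0, and each nvmf_subsystem_remove_listener kills the currently active path so I/O must resume on the next one (the tqpair error bursts above are those connections being torn down). The sequence, extracted from the trace into one sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  brpc="$rpc -s /var/tmp/bdevperf.sock"                      # bdevperf's RPC socket
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover      # active path
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover      # standby path
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  sleep 3                                                    # I/O fails over to 4421
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover      # third path
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 3                                                    # fails over to 4422
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422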
00:18:11.766 17:17:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 88301
00:18:17.030 {
00:18:17.030   "results": [
00:18:17.030     {
00:18:17.030       "job": "NVMe0n1",
00:18:17.030       "core_mask": "0x1",
00:18:17.030       "workload": "verify",
00:18:17.030       "status": "finished",
00:18:17.030       "verify_range": {
00:18:17.030         "start": 0,
00:18:17.030         "length": 16384
00:18:17.030       },
00:18:17.030       "queue_depth": 128,
00:18:17.030       "io_size": 4096,
00:18:17.030       "runtime": 15.005141,
00:18:17.030       "iops": 8801.183541027705,
00:18:17.030       "mibps": 34.379623207139474,
00:18:17.030       "io_failed": 3357,
00:18:17.030       "io_timeout": 0,
00:18:17.030       "avg_latency_us": 14148.547919174018,
00:18:17.030       "min_latency_us": 889.949090909091,
00:18:17.030       "max_latency_us": 17754.298181818183
00:18:17.030     }
00:18:17.030   ],
00:18:17.030   "core_count": 1
00:18:17.030 }
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 88271
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88271 ']'
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88271
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88271
00:18:17.030 killing process with pid 88271
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88271'
00:18:17.030 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88271
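Stripped of the per-line timestamps, the perform_tests summary above is plain JSON, so it is easy to gate on programmatically: this run finished with roughly 8801 IOPS and 3357 failed I/Os (the expected aborts from the three path switches) and status "finished". A sketch, assuming the block was captured to a file named results.json here:

  jq -r '.results[] | select(.job == "NVMe0n1")
         | "status=\(.status) iops=\(.iops) io_failed=\(.io_failed)"' results.json
  # -> status=finished iops=8801.183541027705 io_failed=3357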
17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88271
00:18:17.294 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
[2024-12-06 17:17:43.130482] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
[2024-12-06 17:17:43.130649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88271 ]
[2024-12-06 17:17:43.285473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-06 17:17:43.323353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
8664.00 IOPS, 33.84 MiB/s [2024-12-06T17:17:59.592Z]
[2024-12-06 17:17:45.365117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06 17:17:45.365165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... try.txt continues with matching READ / ABORTED - SQ DELETION pairs for ascending LBAs (79992 through 80120 and beyond): these are the I/Os that were in flight when the active path's submission queue was deleted during failover; the capture is truncated mid-dump ...]
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.294 [2024-12-06 17:17:45.365755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.294 [2024-12-06 17:17:45.365977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.294 [2024-12-06 17:17:45.365993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366067] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.366978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.366994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.295 [2024-12-06 17:17:45.367235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 
[2024-12-06 17:17:45.367389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.295 [2024-12-06 17:17:45.367584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.295 [2024-12-06 17:17:45.367600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367711] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.367979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.367996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:49 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80768 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 
17:17:45.368677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.296 [2024-12-06 17:17:45.368770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.296 [2024-12-06 17:17:45.368827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:18:17.296 [2024-12-06 17:17:45.368842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.296 [2024-12-06 17:17:45.368871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.296 [2024-12-06 17:17:45.368882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:18:17.296 [2024-12-06 17:17:45.368896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.296 [2024-12-06 17:17:45.368921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.296 [2024-12-06 17:17:45.368932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:18:17.296 [2024-12-06 17:17:45.368946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.368960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.296 [2024-12-06 17:17:45.368971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.296 [2024-12-06 17:17:45.368982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:18:17.296 [2024-12-06 17:17:45.368995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.369010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.296 [2024-12-06 
17:17:45.369020] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.296 [2024-12-06 17:17:45.369031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:18:17.296 [2024-12-06 17:17:45.369045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.296 [2024-12-06 17:17:45.369059] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.296 [2024-12-06 17:17:45.369069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.296 [2024-12-06 17:17:45.369080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:18:17.296 [2024-12-06 17:17:45.369094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80952 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369340] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80960 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369379] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80968 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80976 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80984 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369535] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80992 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369578] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.297 [2024-12-06 17:17:45.369588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.297 [2024-12-06 17:17:45.369606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81000 len:8 PRP1 0x0 PRP2 0x0 00:18:17.297 [2024-12-06 17:17:45.369621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369673] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:17.297 [2024-12-06 17:17:45.369731] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.297 [2024-12-06 17:17:45.369753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.297 [2024-12-06 17:17:45.369784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.297 [2024-12-06 17:17:45.369812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.297 [2024-12-06 17:17:45.369841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.297 [2024-12-06 17:17:45.369856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:17.297 [2024-12-06 17:17:45.373840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:17.297 [2024-12-06 17:17:45.373879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fea00 (9): Bad file descriptor 00:18:17.297 [2024-12-06 17:17:45.402420] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
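This dump is the expected shape of a bdevperf pass over a failover event: the TCP qpair to 10.0.0.3:4420 goes down, every command still outstanding on it is completed with ABORTED - SQ DELETION (00/08), bdev_nvme fails the path over to 10.0.0.3:4421, and the controller reset completes, after which bdevperf keeps running. When skimming a long try.txt for the same pattern, tallying those messages and pairing each failover with a completed reset is usually enough to confirm recovery. A hypothetical triage sketch, not part of the SPDK harness (only the grepped strings come from this log):

    #!/usr/bin/env bash
    # Hypothetical helper: pair each failover notice in a bdevperf log
    # with a successful controller reset.
    log=${1:-try.txt}

    failovers=$(grep -c 'bdev_nvme_failover_trid' "$log")
    resets=$(grep -c 'Resetting controller successful' "$log")
    aborts=$(grep -c 'ABORTED - SQ DELETION' "$log")

    printf 'failovers started: %s\n' "$failovers"
    printf 'resets completed : %s\n' "$resets"
    printf 'i/o aborted      : %s\n' "$aborts"

    # Every failover must eventually log a successful reset; anything
    # less means the path never came back and the run needs inspection.
    if [ "$resets" -lt "$failovers" ]; then
        echo "ERROR: $((failovers - resets)) failover(s) did not recover" >&2
        exit 1
    fi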
00:18:17.297 8589.00 IOPS, 33.55 MiB/s [2024-12-06T17:17:59.595Z]
8720.33 IOPS, 34.06 MiB/s [2024-12-06T17:17:59.595Z]
8767.75 IOPS, 34.25 MiB/s [2024-12-06T17:17:59.595Z]
[2024-12-06 17:17:49.110646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-06 17:17:49.110693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... a second failover at 17:17:49 drains the qpair the same way: the READ / ABORTED - SQ DELETION pair repeats for lba 91824 through 92048, then for the outstanding WRITEs lba 92264 through 92408 ...]
[2024-12-06 17:17:49.112294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-06 17:17:49.112309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:17.298 [2024-12-06 17:17:49.112325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:92496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.298 [2024-12-06 17:17:49.112687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.298 [2024-12-06 17:17:49.112926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.298 [2024-12-06 17:17:49.112941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.112957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92120 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.112971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.112988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.299 [2024-12-06 17:17:49.113248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
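(Reference note, not part of the captured output.) Each "READ/WRITE sqid:... cid:... nsid:... lba:... len:..." line above is SPDK dumping a queued I/O command as it is aborted. A minimal standalone C sketch, not SPDK source and with hypothetical names, of how such an lba/len pair sits in an NVMe read/write submission entry, assuming the standard layout (CDW10/11 carry the starting LBA, CDW12 bits 15:0 carry the 0-based block count):

    #include <stdint.h>
    #include <stdio.h>

    struct rw_cdws { uint32_t cdw10, cdw11, cdw12; };

    /* Pack a starting LBA and block count the way an NVMe READ/WRITE
     * submission entry carries them (the NLB field is 0-based). */
    static struct rw_cdws pack_rw(uint64_t slba, uint32_t nblocks)
    {
        struct rw_cdws c;
        c.cdw10 = (uint32_t)slba;          /* SLBA, low 32 bits  */
        c.cdw11 = (uint32_t)(slba >> 32);  /* SLBA, high 32 bits */
        c.cdw12 = nblocks - 1;             /* NLB, 0-based       */
        return c;
    }

    int main(void)
    {
        /* e.g. the "lba:92120 len:8" READ printed above */
        struct rw_cdws c = pack_rw(92120, 8);
        printf("cdw10:0x%08x cdw11:0x%08x cdw12:0x%08x\n",
               c.cdw10, c.cdw11, c.cdw12);
        return 0;
    }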
00:18:17.299 [2024-12-06 17:17:49.113293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:92520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113926] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.113974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.113988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:92712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.299 [2024-12-06 17:17:49.114551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.299 [2024-12-06 17:17:49.114567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.300 [2024-12-06 17:17:49.114597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 
[2024-12-06 17:17:49.114614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:49.114628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:49.114659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:49.114689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:49.114722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:49.114753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:49.114784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114819] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.300 [2024-12-06 17:17:49.114834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.300 [2024-12-06 17:17:49.114846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92256 len:8 PRP1 0x0 PRP2 0x0 00:18:17.300 [2024-12-06 17:17:49.114860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.114910] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:18:17.300 [2024-12-06 17:17:49.114967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.300 [2024-12-06 17:17:49.114990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.115006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.300 [2024-12-06 17:17:49.115020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
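(Reference note, not part of the captured output.) Every completion above carries the same "(00/08) ... p:0 m:0 dnr:0" status: status code type 0x0 (generic command status) with status code 0x08, "command aborted due to SQ deletion", which is what queued I/O resolves to when the qpair is torn down for the failover from 10.0.0.3:4421 to 10.0.0.3:4422 logged just above; dnr:0 marks the aborts as retryable, consistent with bdev_nvme requeueing the I/O across the reset. A minimal standalone decode sketch, not SPDK source, assuming the standard NVMe completion status-field layout (phase bit 0, SC bits 8:1, SCT bits 11:9, More bit 14, DNR bit 15):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit status word from completion dword 3 into the
     * fields printed above as "(SCT/SC) ... p:.. m:.. dnr:..". */
    static void print_status(uint16_t status)
    {
        unsigned p   = status & 0x1;          /* phase tag        */
        unsigned sc  = (status >> 1) & 0xff;  /* status code      */
        unsigned sct = (status >> 9) & 0x7;   /* status code type */
        unsigned m   = (status >> 14) & 0x1;  /* more             */
        unsigned dnr = (status >> 15) & 0x1;  /* do not retry     */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* SCT=0x0, SC=0x08: ABORTED - SQ DELETION, as in every entry above */
        print_status(0x08 << 1);
        return 0;
    }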
00:18:17.300 [2024-12-06 17:17:49.115036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.300 [2024-12-06 17:17:49.115061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.115078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.300 [2024-12-06 17:17:49.115092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:49.115106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:18:17.300 [2024-12-06 17:17:49.119081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:18:17.300 [2024-12-06 17:17:49.119122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fea00 (9): Bad file descriptor 00:18:17.300 [2024-12-06 17:17:49.149222] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:18:17.300 8720.60 IOPS, 34.06 MiB/s [2024-12-06T17:17:59.598Z] 8770.17 IOPS, 34.26 MiB/s [2024-12-06T17:17:59.598Z] 8790.86 IOPS, 34.34 MiB/s [2024-12-06T17:17:59.598Z] 8815.00 IOPS, 34.43 MiB/s [2024-12-06T17:17:59.598Z] 8804.44 IOPS, 34.39 MiB/s [2024-12-06T17:17:59.598Z] [2024-12-06 17:17:53.775664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:35784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:35800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:35816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.775957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.775997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:35848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:35864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:35920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:35928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:35936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:36000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.300 [2024-12-06 17:17:53.776781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.300 [2024-12-06 17:17:53.776795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.776819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.776835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.776851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.776865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.776882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.776896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.776912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.776927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.776943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.776974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.776988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.777004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.777019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.777035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.777049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.777067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.777082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.777099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:36112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.777113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.777129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.777144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 [2024-12-06 17:17:53.777160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.301 [2024-12-06 17:17:53.777174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.301 
[2024-12-06 17:17:53.777191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:17.301 [2024-12-06 17:17:53.777216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same *NOTICE* command/completion pair repeats for every remaining queued I/O (READ lba:36144-36280 and WRITE lba:36288-36776, len:8), all completed ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:18:17.302 [2024-12-06 17:17:53.779873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:17.302 [2024-12-06 17:17:53.779889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:17.302 [2024-12-06 17:17:53.779901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36784 len:8 PRP1 0x0 PRP2 0x0
00:18:17.302 [2024-12-06 17:17:53.779915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:17.302 [2024-12-06 17:17:53.779967] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:18:17.302 [2024-12-06 17:17:53.780024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:17.303 [2024-12-06 17:17:53.780046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:17.303 [2024-12-06 17:17:53.780062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:17.303 [2024-12-06 17:17:53.780076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:17.303 [2024-12-06 17:17:53.780091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:17.303 [2024-12-06 17:17:53.780105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:17.303 [2024-12-06 17:17:53.780120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:17.303 [2024-12-06 17:17:53.780134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:17.303 [2024-12-06 17:17:53.780148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:18:17.303 [2024-12-06 17:17:53.780183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fea00 (9): Bad file descriptor
00:18:17.303 [2024-12-06 17:17:53.784167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:18:17.303 [2024-12-06 17:17:53.808630] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:18:17.303 8765.60 IOPS, 34.24 MiB/s
[2024-12-06T17:17:59.601Z] 8775.91 IOPS, 34.28 MiB/s
[2024-12-06T17:17:59.601Z] 8792.42 IOPS, 34.35 MiB/s
[2024-12-06T17:17:59.601Z] 8794.46 IOPS, 34.35 MiB/s
[2024-12-06T17:17:59.601Z] 8799.86 IOPS, 34.37 MiB/s
00:18:17.303 Latency(us)
00:18:17.303 [2024-12-06T17:17:59.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:17.303 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:17.303 Verification LBA range: start 0x0 length 0x4000
00:18:17.303 NVMe0n1 : 15.01 8801.18 34.38 223.72 0.00 14148.55 889.95 17754.30
00:18:17.303 [2024-12-06T17:17:59.601Z] ===================================================================================================================
00:18:17.303 [2024-12-06T17:17:59.601Z] Total : 8801.18 34.38 223.72 0.00 14148.55 889.95 17754.30
00:18:17.303 Received shutdown signal, test time was about 15.000000 seconds
00:18:17.303
00:18:17.303 Latency(us)
00:18:17.303 [2024-12-06T17:17:59.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:17.303 [2024-12-06T17:17:59.601Z] ===================================================================================================================
00:18:17.303 [2024-12-06T17:17:59.601Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=88507
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 88507 /var/tmp/bdevperf.sock
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 88507 ']'
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:17.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:17.303 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:17.561 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:17.561 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:17.561 17:17:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:18:17.819 [2024-12-06 17:18:00.051923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:18:17.819 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:18:18.077 [2024-12-06 17:18:00.364255] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:18:18.335 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:18.593 NVMe0n1
00:18:18.593 17:18:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:18.851
00:18:19.109 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:19.367
00:18:19.367 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:19.367 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:18:19.625 17:18:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:20.190 17:18:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:18:23.472 17:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:18:23.472 17:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:23.472 17:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88636
00:18:23.472 17:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:23.472 17:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88636
00:18:24.408 {
00:18:24.408 "results": [
00:18:24.408 {
00:18:24.408 "job": "NVMe0n1",
00:18:24.408 "core_mask": "0x1",
00:18:24.408 "workload": "verify",
00:18:24.408 "status": "finished",
00:18:24.408 "verify_range": {
00:18:24.408 "start": 0,
00:18:24.408 "length": 16384
00:18:24.408 },
00:18:24.408 "queue_depth": 128,
00:18:24.408 "io_size": 4096, 00:18:24.408 "runtime": 1.005698, 00:18:24.408 "iops": 9050.430646178078, 00:18:24.408 "mibps": 35.35324471163312, 00:18:24.408 "io_failed": 0, 00:18:24.408 "io_timeout": 0, 00:18:24.408 "avg_latency_us": 14069.008481652385, 00:18:24.408 "min_latency_us": 2457.6, 00:18:24.408 "max_latency_us": 14954.123636363636 00:18:24.408 } 00:18:24.408 ], 00:18:24.408 "core_count": 1 00:18:24.408 } 00:18:24.408 17:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:24.408 [2024-12-06 17:17:59.472687] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:18:24.408 [2024-12-06 17:17:59.472811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88507 ] 00:18:24.408 [2024-12-06 17:17:59.630058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.408 [2024-12-06 17:17:59.662618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.408 [2024-12-06 17:18:02.164694] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:18:24.408 [2024-12-06 17:18:02.164830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.408 [2024-12-06 17:18:02.164857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.408 [2024-12-06 17:18:02.164877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.408 [2024-12-06 17:18:02.164892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.408 [2024-12-06 17:18:02.164908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.408 [2024-12-06 17:18:02.164922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.408 [2024-12-06 17:18:02.164937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.408 [2024-12-06 17:18:02.164952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.408 [2024-12-06 17:18:02.164967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:24.408 [2024-12-06 17:18:02.165019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:24.408 [2024-12-06 17:18:02.165053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca0a00 (9): Bad file descriptor 00:18:24.408 [2024-12-06 17:18:02.172780] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:24.408 Running I/O for 1 seconds... 
00:18:24.408 8974.00 IOPS, 35.05 MiB/s
00:18:24.408 Latency(us)
00:18:24.408 [2024-12-06T17:18:06.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:24.408 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:24.408 Verification LBA range: start 0x0 length 0x4000
00:18:24.408 NVMe0n1 : 1.01 9050.43 35.35 0.00 0.00 14069.01 2457.60 14954.12
00:18:24.408 [2024-12-06T17:18:06.706Z] ===================================================================================================================
00:18:24.408 [2024-12-06T17:18:06.706Z] Total : 9050.43 35.35 0.00 0.00 14069.01 2457.60 14954.12
00:18:24.408 17:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:24.408 17:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:18:25.048 17:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:25.048 17:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:25.048 17:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:25.331 17:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:25.898 17:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:29.184 17:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:29.184 17:18:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:18:29.184 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 88507
00:18:29.184 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88507 ']'
00:18:29.184 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88507
00:18:29.184 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:29.184 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:29.184 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88507
00:18:29.184 killing process with pid 88507
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88507'
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88507
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88507
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:29.442 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:18:29.442 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:29.442 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:18:29.442 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:29.442 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:29.701 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 88169 ']'
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 88169
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 88169 ']'
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 88169
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88169
00:18:29.701 killing process with pid 88169
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88169'
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 88169
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 88169
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:18:29.701 17:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
00:18:29.960
00:18:29.960 real 0m32.152s
user 2m5.888s
sys 0m4.411s
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:29.960 17:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:29.960 ************************************
00:18:29.960 END TEST nvmf_failover
00:18:29.960 ************************************
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:30.220 ************************************
00:18:30.220 START TEST nvmf_host_discovery
************************************
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:18:30.220 * Looking for test storage...
00:18:30.220 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:30.220 --rc genhtml_branch_coverage=1
00:18:30.220 --rc genhtml_function_coverage=1
00:18:30.220 --rc genhtml_legend=1
00:18:30.220 --rc geninfo_all_blocks=1
00:18:30.220 --rc geninfo_unexecuted_blocks=1
00:18:30.220
00:18:30.220 '
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:30.220 --rc genhtml_branch_coverage=1
00:18:30.220 --rc genhtml_function_coverage=1
00:18:30.220 --rc genhtml_legend=1
00:18:30.220 --rc geninfo_all_blocks=1
00:18:30.220 --rc geninfo_unexecuted_blocks=1
00:18:30.220
00:18:30.220 '
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:30.220 --rc genhtml_branch_coverage=1
00:18:30.220 --rc genhtml_function_coverage=1
00:18:30.220 --rc genhtml_legend=1
00:18:30.220 --rc geninfo_all_blocks=1
00:18:30.220 --rc geninfo_unexecuted_blocks=1
00:18:30.220
00:18:30.220 '
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:30.220 --rc genhtml_branch_coverage=1
00:18:30.220 --rc genhtml_function_coverage=1
00:18:30.220 --rc genhtml_legend=1
00:18:30.220 --rc geninfo_all_blocks=1
00:18:30.220 --rc geninfo_unexecuted_blocks=1
00:18:30.220
00:18:30.220 '
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery --
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.220 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:30.221 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:18:30.221 Cannot find device "nvmf_init_br"
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:18:30.221 Cannot find device "nvmf_init_br2"
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:18:30.221 Cannot find device "nvmf_tgt_br"
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true
00:18:30.221 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:18:30.480 Cannot find device "nvmf_tgt_br2"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:18:30.480 Cannot find device "nvmf_init_br"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:18:30.480 Cannot find device "nvmf_init_br2"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:18:30.480 Cannot find device "nvmf_tgt_br"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:18:30.480 Cannot find device "nvmf_tgt_br2"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:18:30.480 Cannot find device "nvmf_br"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:18:30.480 Cannot find device "nvmf_init_if"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:18:30.480 Cannot find device "nvmf_init_if2"
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true
00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:30.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such
file or directory 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:30.480 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:30.480 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:30.481 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:30.481 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:30.740 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:30.740 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:18:30.740 00:18:30.740 --- 10.0.0.3 ping statistics --- 00:18:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.740 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:30.740 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:30.740 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:18:30.740 00:18:30.740 --- 10.0.0.4 ping statistics --- 00:18:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.740 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:30.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:18:30.740 00:18:30.740 --- 10.0.0.1 ping statistics --- 00:18:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.740 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:30.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:30.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:18:30.740 00:18:30.740 --- 10.0.0.2 ping statistics --- 00:18:30.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.740 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:30.740 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=88995 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 88995 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88995 ']' 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.741 17:18:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:30.741 [2024-12-06 17:18:13.008818] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
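With connectivity verified, nvmfappstart launches the target application inside the namespace: prepending the NVMF_TARGET_NS_CMD array to NVMF_APP (the expansion shown above) confines the target to the 10.0.0.3/10.0.0.4 interfaces, and the comment-tagged iptables rules inserted earlier (SPDK_NVMF:...) let port 4420 traffic through. A minimal sketch of the launch-and-wait pattern, assuming waitforlisten polls the RPC socket with rpc_get_methods as SPDK's autotest helpers do (binary path, flags, and socket path taken from the trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Block until the app is up and answering RPCs on its UNIX socket.
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died
        sleep 0.5
    done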
00:18:30.741 [2024-12-06 17:18:13.008912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.000 [2024-12-06 17:18:13.161396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.000 [2024-12-06 17:18:13.198946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.000 [2024-12-06 17:18:13.199009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.000 [2024-12-06 17:18:13.199034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.000 [2024-12-06 17:18:13.199044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.000 [2024-12-06 17:18:13.199053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.000 [2024-12-06 17:18:13.199422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.000 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.000 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:31.000 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.000 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.000 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 [2024-12-06 17:18:13.331087] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 [2024-12-06 17:18:13.339249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 null0 00:18:31.260 17:18:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 null1 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=89033 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 89033 /tmp/host.sock 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 89033 ']' 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.260 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.260 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.260 [2024-12-06 17:18:13.428735] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
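From here the test drives two independent SPDK processes over JSON-RPC: the target inside the namespace answers on the default /var/tmp/spdk.sock, while the host process started above answers on /tmp/host.sock, which is why every host-side rpc_cmd in the log carries -s /tmp/host.sock. A sketch of that split, using the RPCs visible in the trace (rpc_py stands in for SPDK's scripts/rpc.py):

    rpc_py=scripts/rpc.py

    # Target side: TCP transport, discovery listener on 8009, null bdevs.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    $rpc_py bdev_null_create null0 1000 512
    $rpc_py bdev_null_create null1 1000 512

    # Host side: attach the discovery service (started just below in the log).
    $rpc_py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

Once the target later exposes nqn.2016-06.io.spdk:cnode0 with a 4420 listener and allows the host NQN, the discovery service attaches a controller named nvme0 on its own; the test never issues a manual bdev_nvme_attach_controller.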
00:18:31.260 [2024-12-06 17:18:13.428840] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89033 ] 00:18:31.519 [2024-12-06 17:18:13.578328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.519 [2024-12-06 17:18:13.617183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:31.519 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.520 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.778 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:31.779 17:18:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:31.779 [2024-12-06 17:18:14.067437] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:31.779 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:32.037 17:18:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:32.037 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:32.038 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.038 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:32.038 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:32.038 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.038 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:18:32.038 17:18:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:32.604 [2024-12-06 17:18:14.724521] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:32.604 [2024-12-06 17:18:14.724557] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:32.604 [2024-12-06 17:18:14.724578] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:32.604 [2024-12-06 17:18:14.810629] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:18:32.604 [2024-12-06 17:18:14.865025] 
bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:18:32.604 [2024-12-06 17:18:14.865811] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2511580:1 started. 00:18:32.604 [2024-12-06 17:18:14.867679] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:32.604 [2024-12-06 17:18:14.867707] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:32.604 [2024-12-06 17:18:14.872849] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2511580 was disconnected and freed. delete nvme_qpair. 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.170 17:18:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:33.170 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.477 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:18:33.477 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.477 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:33.477 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 [2024-12-06 17:18:15.556702] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2511920:1 started. 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.478 [2024-12-06 17:18:15.562958] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2511920 was disconnected and freed. delete nvme_qpair. 
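Two helpers reconstructed from the expansions above do the heavy lifting in these checks. waitforcondition re-evaluates an arbitrary shell condition up to ten times, one second apart (the local max=10, (( max-- )), eval, sleep 1 sequence in the trace), and get_notification_count asks the host app how many notifications arrived since the last recorded notify_id, which is why notify_id advances 0 -> 1 -> 2 as namespaces are added. A sketch consistent with the trace, not the verbatim helper source:

    waitforcondition() {
        local cond=$1 max=10
        while ((max--)); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1   # condition never became true
    }

    get_notification_count() {
        # Count notifications newer than notify_id, then advance the cursor.
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

Usage mirrors the log: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' blocks until the hot-added null1 namespace shows up as a second bdev on the host.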
00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 [2024-12-06 17:18:15.664137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:33.478 [2024-12-06 17:18:15.664818] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:33.478 [2024-12-06 17:18:15.664843] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.478 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:33.479 [2024-12-06 17:18:15.750937] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:18:33.479 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.738 [2024-12-06 17:18:15.809407] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:18:33.738 [2024-12-06 17:18:15.809492] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:33.738 [2024-12-06 17:18:15.809503] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:18:33.738 [2024-12-06 17:18:15.809509] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:33.738 17:18:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:34.673 17:18:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.673 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.673 [2024-12-06 17:18:16.955249] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:18:34.673 [2024-12-06 17:18:16.955298] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:34.673 [2024-12-06 17:18:16.955382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.674 [2024-12-06 17:18:16.955416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.674 [2024-12-06 17:18:16.955430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.674 [2024-12-06 17:18:16.955440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.674 [2024-12-06 17:18:16.955450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.674 [2024-12-06 17:18:16.955460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.674 [2024-12-06 17:18:16.955470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.674 [2024-12-06 17:18:16.955479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.674 [2024-12-06 17:18:16.955490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:34.674 17:18:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.674 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:34.674 [2024-12-06 17:18:16.965323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.933 [2024-12-06 17:18:16.975340] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:34.933 [2024-12-06 17:18:16.975366] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.933 [2024-12-06 17:18:16.975373] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.933 [2024-12-06 17:18:16.975379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.933 [2024-12-06 17:18:16.975414] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:34.933 [2024-12-06 17:18:16.975499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-12-06 17:18:16.975520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-12-06 17:18:16.975532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.933 [2024-12-06 17:18:16.975549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.933 [2024-12-06 17:18:16.975564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.933 [2024-12-06 17:18:16.975573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.933 [2024-12-06 17:18:16.975585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.933 [2024-12-06 17:18:16.975593] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:34.933 [2024-12-06 17:18:16.975600] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
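For context: the burst of reconnect errors that follows is expected. The rpc_cmd at host/discovery.sh@127 (traced just above) removed the tcp listener on 10.0.0.3:4420, which makes the target raise an AER on the discovery controller at 8009 and the host re-read the discovery log page. Issued by hand with SPDK's rpc.py, the same listener removal would look roughly like this (a sketch; arguments copied from the trace):

    # remove the 4420 listener from the subsystem; admin commands queued on
    # that path complete as ABORTED - SQ DELETION, as in the log above
    scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420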
00:18:34.933 [2024-12-06 17:18:16.975605] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:34.933 17:18:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.933 [2024-12-06 17:18:16.985423] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:34.933 [2024-12-06 17:18:16.985446] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.933 [2024-12-06 17:18:16.985453] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.933 [2024-12-06 17:18:16.985458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.933 [2024-12-06 17:18:16.985486] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:34.933 [2024-12-06 17:18:16.985541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-12-06 17:18:16.985561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-12-06 17:18:16.985572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.933 [2024-12-06 17:18:16.985589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.933 [2024-12-06 17:18:16.985603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.933 [2024-12-06 17:18:16.985611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.933 [2024-12-06 17:18:16.985621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.933 [2024-12-06 17:18:16.985629] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:34.933 [2024-12-06 17:18:16.985635] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:34.933 [2024-12-06 17:18:16.985640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:34.933 [2024-12-06 17:18:16.995496] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:34.933 [2024-12-06 17:18:16.995517] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.933 [2024-12-06 17:18:16.995523] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.933 [2024-12-06 17:18:16.995528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.933 [2024-12-06 17:18:16.995557] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
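The waitforcondition/eval pairs threaded through this trace come from a small polling helper in common/autotest_common.sh. Reconstructed from the xtrace tags (@918-@924), it is approximately:

    waitforcondition() {
        local cond=$1    # @918: the condition string, eval'd each attempt
        local max=10     # @919: poll at most 10 times
        while (( max-- )); do                   # @920
            if eval "$cond"; then return 0; fi  # @921/@922: condition met
            sleep 1                             # @924: wait before retrying
        done
        return 1   # condition never became true within the budget
    }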
00:18:34.933 [2024-12-06 17:18:16.995609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.933 [2024-12-06 17:18:16.995628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.933 [2024-12-06 17:18:16.995650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.933 [2024-12-06 17:18:16.995666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.933 [2024-12-06 17:18:16.995681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.933 [2024-12-06 17:18:16.995690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.933 [2024-12-06 17:18:16.995699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.933 [2024-12-06 17:18:16.995707] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:34.933 [2024-12-06 17:18:16.995713] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:34.933 [2024-12-06 17:18:16.995718] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:34.933 [2024-12-06 17:18:17.005574] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:34.934 [2024-12-06 17:18:17.005624] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.934 [2024-12-06 17:18:17.005631] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.005638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.934 [2024-12-06 17:18:17.005679] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.005768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-12-06 17:18:17.005791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-12-06 17:18:17.005803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.934 [2024-12-06 17:18:17.005821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.934 [2024-12-06 17:18:17.005837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.934 [2024-12-06 17:18:17.005846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.934 [2024-12-06 17:18:17.005859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.934 [2024-12-06 17:18:17.005868] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:18:34.934 [2024-12-06 17:18:17.005874] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:34.934 [2024-12-06 17:18:17.005879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:34.934 [2024-12-06 17:18:17.015695] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:34.934 [2024-12-06 17:18:17.015719] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.934 [2024-12-06 17:18:17.015725] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.015732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.934 [2024-12-06 17:18:17.015766] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.015871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-12-06 17:18:17.015893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-12-06 17:18:17.015906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.934 [2024-12-06 17:18:17.015925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.934 [2024-12-06 17:18:17.015940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.934 [2024-12-06 17:18:17.015950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.934 [2024-12-06 17:18:17.015961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.934 [2024-12-06 17:18:17.015970] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:34.934 [2024-12-06 17:18:17.015977] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:34.934 [2024-12-06 17:18:17.015982] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.934 [2024-12-06 17:18:17.025781] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:18:34.934 [2024-12-06 17:18:17.025808] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.934 [2024-12-06 17:18:17.025815] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.025822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.934 [2024-12-06 17:18:17.025858] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.025941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-12-06 17:18:17.025963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-12-06 17:18:17.025975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.934 [2024-12-06 17:18:17.025992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.934 [2024-12-06 17:18:17.026008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.934 [2024-12-06 17:18:17.026018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.934 [2024-12-06 17:18:17.026028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.934 [2024-12-06 17:18:17.026037] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:34.934 [2024-12-06 17:18:17.026043] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:34.934 [2024-12-06 17:18:17.026049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:18:34.934 [2024-12-06 17:18:17.035872] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
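The two conditions being polled reduce to one-line RPC pipelines against the host app's socket. From the traced commands (host/discovery.sh@55 and @63) they are roughly as follows; rpc_cmd is the harness wrapper around scripts/rpc.py:

    get_bdev_list() {   # @55: all bdev names, sorted and space-joined
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {   # @63: service ports of one controller's paths
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }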
00:18:34.934 [2024-12-06 17:18:17.035897] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:18:34.934 [2024-12-06 17:18:17.035904] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.035909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:18:34.934 [2024-12-06 17:18:17.035935] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:18:34.934 [2024-12-06 17:18:17.035991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:34.934 [2024-12-06 17:18:17.036011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489850 with addr=10.0.0.3, port=4420 00:18:34.934 [2024-12-06 17:18:17.036022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489850 is same with the state(6) to be set 00:18:34.934 [2024-12-06 17:18:17.036049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489850 (9): Bad file descriptor 00:18:34.934 [2024-12-06 17:18:17.036064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:18:34.934 [2024-12-06 17:18:17.036074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:18:34.934 [2024-12-06 17:18:17.036084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:18:34.934 [2024-12-06 17:18:17.036092] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:18:34.934 [2024-12-06 17:18:17.036098] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:18:34.934 [2024-12-06 17:18:17.036103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
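errno 111 in the repeated posix_sock_create failures is ECONNREFUSED: nothing listens on 10.0.0.3:4420 any more, so every one-second reconnect poll fails until the refreshed discovery log page (next entry) drops the 4420 path and only 4421 remains. A hypothetical spot check, not part of the test, would show the same thing:

    nc -z -w1 10.0.0.3 4420 || echo '4420: connection refused (errno 111)'
    nc -z -w1 10.0.0.3 4421 && echo '4421: still listening'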
00:18:34.934 [2024-12-06 17:18:17.042993] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:18:34.934 [2024-12-06 17:18:17.043024] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:34.934 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:34.935 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:35.193 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.194 17:18:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.126 [2024-12-06 17:18:18.374962] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:18:36.126 [2024-12-06 17:18:18.374997] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:18:36.126 [2024-12-06 17:18:18.375018] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:18:36.384 [2024-12-06 17:18:18.461084] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:18:36.384 [2024-12-06 17:18:18.519433] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:18:36.384 [2024-12-06 17:18:18.519986] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x250b8d0:1 started. 00:18:36.384 [2024-12-06 17:18:18.521951] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:18:36.384 [2024-12-06 17:18:18.521997] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:18:36.384 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.384 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:36.385 [2024-12-06 17:18:18.523753] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x250b8d0 was disconnected and freed. delete nvme_qpair. 
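The restart at host/discovery.sh@141 is the plain discovery-start RPC with a waiting attach; by hand it would be issued roughly like this (a sketch, arguments copied from the trace; -w corresponds to wait_for_attach in the JSON below):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w

The NOT-wrapped calls that follow assert that starting a second discovery service against the same 10.0.0.3:8009 endpoint, whether under the existing name or a new one, is rejected with JSON-RPC error -17 (File exists).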
00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.385 2024/12/06 17:18:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:36.385 request: 00:18:36.385 { 00:18:36.385 "method": "bdev_nvme_start_discovery", 00:18:36.385 "params": { 00:18:36.385 "name": "nvme", 00:18:36.385 "trtype": "tcp", 00:18:36.385 "traddr": "10.0.0.3", 00:18:36.385 "adrfam": "ipv4", 00:18:36.385 "trsvcid": "8009", 00:18:36.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:36.385 "wait_for_attach": true 00:18:36.385 } 00:18:36.385 } 00:18:36.385 Got JSON-RPC error response 00:18:36.385 GoRPCClient: error on JSON-RPC call 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.385 17:18:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.385 2024/12/06 17:18:18 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:18:36.385 request: 00:18:36.385 { 00:18:36.385 "method": "bdev_nvme_start_discovery", 00:18:36.385 "params": { 00:18:36.385 "name": "nvme_second", 00:18:36.385 "trtype": "tcp", 00:18:36.385 "traddr": "10.0.0.3", 00:18:36.385 "adrfam": "ipv4", 00:18:36.385 "trsvcid": "8009", 00:18:36.385 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:36.385 "wait_for_attach": true 00:18:36.385 } 00:18:36.385 } 00:18:36.385 Got JSON-RPC error response 00:18:36.385 GoRPCClient: error on JSON-RPC call 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
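The NOT wrapper used for both failing attempts inverts the wrapped command's exit status. Simplified from the traced logic in common/autotest_common.sh (@652-@679; the real helper also routes through valid_exec_arg so that shell functions like rpc_cmd are accepted, and special-cases signal exits via the (( es > 128 )) check visible in the trace):

    NOT() {
        local es=0
        "$@" || es=$?    # run the command, capture its failure status
        (( !es == 0 ))   # succeed only if the wrapped command failed
    }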
00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.385 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.643 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:36.644 
17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.644 17:18:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:37.623 [2024-12-06 17:18:19.782497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:37.623 [2024-12-06 17:18:19.782563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25048d0 with addr=10.0.0.3, port=8010 00:18:37.623 [2024-12-06 17:18:19.782585] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:37.623 [2024-12-06 17:18:19.782596] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:37.623 [2024-12-06 17:18:19.782607] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:38.584 [2024-12-06 17:18:20.782488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:38.584 [2024-12-06 17:18:20.782538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25048d0 with addr=10.0.0.3, port=8010 00:18:38.584 [2024-12-06 17:18:20.782558] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:38.584 [2024-12-06 17:18:20.782568] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:38.584 [2024-12-06 17:18:20.782577] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:18:39.518 [2024-12-06 17:18:21.782337] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:18:39.518 2024/12/06 17:18:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:18:39.518 request: 00:18:39.518 { 00:18:39.518 "method": "bdev_nvme_start_discovery", 00:18:39.518 "params": { 00:18:39.518 "name": "nvme_second", 00:18:39.518 "trtype": "tcp", 00:18:39.518 "traddr": "10.0.0.3", 00:18:39.518 "adrfam": "ipv4", 00:18:39.518 "trsvcid": "8010", 00:18:39.518 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:39.518 "wait_for_attach": false, 00:18:39.518 "attach_timeout_ms": 3000 00:18:39.518 } 00:18:39.518 } 00:18:39.518 Got JSON-RPC error response 00:18:39.518 GoRPCClient: error on JSON-RPC call 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq 
-r '.[].name' 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:39.518 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 89033 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.778 rmmod nvme_tcp 00:18:39.778 rmmod nvme_fabrics 00:18:39.778 rmmod nvme_keyring 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 88995 ']' 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 88995 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 88995 ']' 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 88995 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88995 00:18:39.778 killing process with pid 88995 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88995' 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 88995 00:18:39.778 17:18:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@978 -- # wait 88995 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:40.038 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.296 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.296 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:40.296 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.296 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.296 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.296 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:18:40.296 00:18:40.296 real 0m10.110s 00:18:40.296 user 0m19.672s 00:18:40.297 sys 0m1.518s 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:40.297 
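The teardown above (nvmftestfini -> nvmf_veth_fini) unwinds the veth/bridge topology the harness built for NET_TYPE=virt and removes the target's network namespace. Done manually, the core of it is the following sketch; interface and namespace names are taken from the trace, and the final netns delete is an assumption since _remove_spdk_ns is silenced in this log:

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster   # detach from the bridge
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed step inside _remove_spdk_ns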
************************************ 00:18:40.297 END TEST nvmf_host_discovery 00:18:40.297 ************************************ 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:40.297 ************************************ 00:18:40.297 START TEST nvmf_host_multipath_status 00:18:40.297 ************************************ 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:40.297 * Looking for test storage... 00:18:40.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:18:40.297 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.556 --rc genhtml_branch_coverage=1 00:18:40.556 --rc genhtml_function_coverage=1 00:18:40.556 --rc genhtml_legend=1 00:18:40.556 --rc geninfo_all_blocks=1 00:18:40.556 --rc geninfo_unexecuted_blocks=1 00:18:40.556 00:18:40.556 ' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.556 --rc genhtml_branch_coverage=1 00:18:40.556 --rc genhtml_function_coverage=1 00:18:40.556 --rc genhtml_legend=1 00:18:40.556 --rc geninfo_all_blocks=1 00:18:40.556 --rc geninfo_unexecuted_blocks=1 00:18:40.556 00:18:40.556 ' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.556 --rc genhtml_branch_coverage=1 00:18:40.556 --rc genhtml_function_coverage=1 00:18:40.556 --rc genhtml_legend=1 00:18:40.556 --rc geninfo_all_blocks=1 00:18:40.556 --rc geninfo_unexecuted_blocks=1 00:18:40.556 00:18:40.556 ' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.556 --rc genhtml_branch_coverage=1 00:18:40.556 --rc genhtml_function_coverage=1 00:18:40.556 --rc genhtml_legend=1 00:18:40.556 --rc geninfo_all_blocks=1 00:18:40.556 --rc geninfo_unexecuted_blocks=1 00:18:40.556 00:18:40.556 ' 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.556 17:18:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.556 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.557 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:40.557 Cannot find device "nvmf_init_br" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:40.557 Cannot find device "nvmf_init_br2" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:40.557 Cannot find device "nvmf_tgt_br" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.557 Cannot find device "nvmf_tgt_br2" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:40.557 Cannot find device "nvmf_init_br" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:40.557 Cannot find device "nvmf_init_br2" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:40.557 Cannot find device "nvmf_tgt_br" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:40.557 Cannot find device "nvmf_tgt_br2" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:40.557 Cannot find device "nvmf_br" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:18:40.557 Cannot find device "nvmf_init_if" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:40.557 Cannot find device "nvmf_init_if2" 00:18:40.557 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.558 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.558 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.558 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:40.817 17:18:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:40.817 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:40.817 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:18:40.817 00:18:40.817 --- 10.0.0.3 ping statistics --- 00:18:40.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.817 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:40.817 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:40.817 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:18:40.817 00:18:40.817 --- 10.0.0.4 ping statistics --- 00:18:40.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.817 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:40.817 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.817 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:18:40.817 00:18:40.817 --- 10.0.0.1 ping statistics --- 00:18:40.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.817 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:40.817 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.817 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:18:40.817 00:18:40.817 --- 10.0.0.2 ping statistics --- 00:18:40.817 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.817 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=89548 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 89548 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89548 ']' 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
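Behind the nvmfappstart call above, the harness launches the target inside the nvmf_tgt_ns_spdk namespace and then parks in waitforlisten until /var/tmp/spdk.sock answers. A minimal sketch of that start-and-wait pattern, using the rpc_py path defined earlier in this log (the polling loop is illustrative; the real waitforlisten also enforces the max_retries=100 budget shown above):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
# Poll the RPC socket instead of sleeping blindly; bail out if the target died.
while ! "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1
    sleep 0.1
done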
00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.817 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:41.076 [2024-12-06 17:18:23.138102] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:18:41.076 [2024-12-06 17:18:23.138201] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.076 [2024-12-06 17:18:23.289406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:41.076 [2024-12-06 17:18:23.326789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.076 [2024-12-06 17:18:23.326873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.076 [2024-12-06 17:18:23.326887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.076 [2024-12-06 17:18:23.326897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.076 [2024-12-06 17:18:23.326905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.076 [2024-12-06 17:18:23.327730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.076 [2024-12-06 17:18:23.327745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=89548 00:18:41.336 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:41.596 [2024-12-06 17:18:23.766566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.596 17:18:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:41.855 Malloc0 00:18:41.855 17:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:42.423 17:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.423 17:18:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.990 [2024-12-06 17:18:24.983687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:42.990 [2024-12-06 17:18:25.247837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89638 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89638 /var/tmp/bdevperf.sock 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89638 ']' 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
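Stripped of xtrace decoration, everything bdevperf needs was configured through a short RPC sequence: one TCP transport, one malloc bdev, one subsystem with ANA reporting enabled (-r), and the two listeners whose ANA states the test flips below. Condensed verbatim from the calls above:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

bdevperf is then started with -z (wait for RPC-driven configuration) on its own socket, and both listeners are attached with bdev_nvme_attach_controller -x multipath, which is why the same Nvme0n1 name appears twice in the output below.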
00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.990 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:43.556 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.556 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:43.556 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:43.814 17:18:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:44.072 Nvme0n1 00:18:44.072 17:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:44.639 Nvme0n1 00:18:44.639 17:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:44.639 17:18:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:46.580 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:46.580 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:46.839 17:18:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:47.098 17:18:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:48.033 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:48.033 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:48.033 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.033 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:48.600 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:48.600 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:48.600 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.600 17:18:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:48.858 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:48.858 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:48.858 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:48.858 17:18:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:49.117 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.117 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:49.117 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.117 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:49.376 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.376 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:49.376 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.376 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:49.633 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.633 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:49.633 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.633 17:18:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:49.891 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.891 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:49.891 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:50.148 17:18:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:50.715 17:18:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:51.648 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:51.648 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:51.648 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.648 17:18:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:51.905 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:51.905 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:51.905 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:51.905 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.162 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.162 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:52.162 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.162 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:52.419 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.419 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:52.419 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.419 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:52.676 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.676 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:52.676 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.676 17:18:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:53.241 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:53.807 17:18:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:18:54.065 17:18:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:54.998 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:54.998 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:54.998 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.998 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:55.256 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.256 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:55.256 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.256 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:55.514 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:55.514 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:55.514 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.514 17:18:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:55.773 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.773 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:18:55.773 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.773 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.340 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:56.908 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.908 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:18:56.908 17:18:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:56.908 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:57.475 17:18:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:58.413 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:58.413 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:58.413 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:58.413 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.673 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:58.673 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:58.673 17:18:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:58.673 17:18:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:58.932 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:58.932 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:58.932 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:58.932 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.191 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.191 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:59.191 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:59.191 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.468 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.468 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:59.468 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.468 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:59.727 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.727 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:59.727 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.727 17:18:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:59.986 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.986 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:59.986 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:00.244 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:00.520 17:18:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:01.905 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:01.905 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:01.905 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.905 17:18:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:01.905 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:01.905 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:01.905 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:01.905 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:02.164 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:02.164 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:02.164 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:02.164 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:02.423 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:02.423 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:02.423 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:02.423 17:18:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.004 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:03.004 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:03.004 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.004 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:19:03.261 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.261 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:03.261 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:03.261 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:03.519 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:03.519 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:03.519 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:19:03.776 17:18:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:04.034 17:18:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:04.968 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:04.968 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:04.968 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:04.968 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.226 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:05.226 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:05.226 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.226 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:05.793 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.793 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:05.793 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:05.793 17:18:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:19:05.793 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:05.793 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:05.793 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:05.793 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.359 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.359 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:06.359 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.359 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:06.618 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:06.618 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:06.618 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:06.618 17:18:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:06.877 17:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:06.877 17:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:07.136 17:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:07.136 17:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:07.394 17:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:07.960 17:18:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:08.895 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:08.895 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:08.895 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
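Just above, the test switches bdev Nvme0n1 to the active_active multipath policy and then walks both listeners through ANA state combinations, re-running check_status after each change. The set_ANA_state step is two target-side RPCs; a sketch matching the @59/@60 trace lines (the subsystem NQN, address, and ports are the ones used in this run):

set_ANA_state() {
    # Flip the ANA state advertised by each of cnode1's two listeners. The
    # host learns of the change asynchronously, which is why the script
    # sleeps for a second before re-checking path status.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}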
00:19:08.895 17:18:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:09.154 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.154 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:09.154 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.154 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:09.413 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.413 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:09.413 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.413 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:09.672 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:09.672 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:09.672 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:09.672 17:18:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:10.240 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.240 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:10.240 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.240 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:10.499 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.499 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:10.499 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.499 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:10.757 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.757 
17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:10.757 17:18:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:11.016 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:11.275 17:18:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:12.232 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:19:12.232 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:12.232 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.232 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:12.799 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:12.799 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:12.799 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.799 17:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:12.799 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:12.799 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:12.799 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:12.800 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:13.366 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.366 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:13.366 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.366 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:13.644 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.644 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:13.644 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.644 17:18:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:13.917 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.917 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:13.917 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.917 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:14.175 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.175 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:14.176 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:14.434 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:14.692 17:18:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:16.069 17:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:16.069 17:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:16.069 17:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.069 17:18:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:16.069 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.069 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:16.069 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:16.069 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.327 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.327 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:19:16.327 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:16.327 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.893 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.893 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:16.893 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.893 17:18:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.152 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.152 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:17.152 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:17.152 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.410 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.410 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:17.410 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.410 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:17.668 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.668 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:17.668 17:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:17.926 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:18.185 17:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:19.121 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:19.121 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:19.121 17:19:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.122 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:19.690 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.690 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:19.690 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:19.690 17:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.950 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.950 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:19.950 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:19.950 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.210 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.210 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:20.210 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.210 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:20.468 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.468 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:20.468 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.468 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:20.727 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.727 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:20.727 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.727 17:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89638 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89638 ']' 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89638 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89638 00:19:21.318 killing process with pid 89638 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89638' 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89638 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89638 00:19:21.318 { 00:19:21.318 "results": [ 00:19:21.318 { 00:19:21.318 "job": "Nvme0n1", 00:19:21.318 "core_mask": "0x4", 00:19:21.318 "workload": "verify", 00:19:21.318 "status": "terminated", 00:19:21.318 "verify_range": { 00:19:21.318 "start": 0, 00:19:21.318 "length": 16384 00:19:21.318 }, 00:19:21.318 "queue_depth": 128, 00:19:21.318 "io_size": 4096, 00:19:21.318 "runtime": 36.519303, 00:19:21.318 "iops": 8517.961035565218, 00:19:21.318 "mibps": 33.273285295176635, 00:19:21.318 "io_failed": 0, 00:19:21.318 "io_timeout": 0, 00:19:21.318 "avg_latency_us": 14997.863956303318, 00:19:21.318 "min_latency_us": 146.15272727272728, 00:19:21.318 "max_latency_us": 4087539.898181818 00:19:21.318 } 00:19:21.318 ], 00:19:21.318 "core_count": 1 00:19:21.318 } 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89638 00:19:21.318 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:21.318 [2024-12-06 17:18:25.339838] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:19:21.318 [2024-12-06 17:18:25.339982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89638 ] 00:19:21.319 [2024-12-06 17:18:25.490098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.319 [2024-12-06 17:18:25.522513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.319 Running I/O for 90 seconds... 
00:19:21.319 8865.00 IOPS, 34.63 MiB/s [2024-12-06T17:19:03.617Z] 8901.00 IOPS, 34.77 MiB/s [2024-12-06T17:19:03.617Z] 8918.67 IOPS, 34.84 MiB/s [2024-12-06T17:19:03.617Z] 8889.50 IOPS, 34.72 MiB/s [2024-12-06T17:19:03.617Z] 8922.20 IOPS, 34.85 MiB/s [2024-12-06T17:19:03.617Z] 8925.00 IOPS, 34.86 MiB/s [2024-12-06T17:19:03.617Z] 8931.86 IOPS, 34.89 MiB/s [2024-12-06T17:19:03.617Z] 8941.50 IOPS, 34.93 MiB/s [2024-12-06T17:19:03.617Z] 8948.33 IOPS, 34.95 MiB/s [2024-12-06T17:19:03.617Z] 8951.30 IOPS, 34.97 MiB/s [2024-12-06T17:19:03.617Z] 8944.45 IOPS, 34.94 MiB/s [2024-12-06T17:19:03.617Z] 8938.25 IOPS, 34.92 MiB/s [2024-12-06T17:19:03.617Z] 8944.31 IOPS, 34.94 MiB/s [2024-12-06T17:19:03.617Z] 8949.50 IOPS, 34.96 MiB/s [2024-12-06T17:19:03.617Z] 8954.60 IOPS, 34.98 MiB/s [2024-12-06T17:19:03.617Z] [2024-12-06 17:18:42.427995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.319 [2024-12-06 17:18:42.428068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428374] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.428967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.428984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 
nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.319 [2024-12-06 17:18:42.429430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.319 [2024-12-06 17:18:42.429454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.429470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.429492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.429508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.429530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.429547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.429569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.429585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.429607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.429622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.429644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.429661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.431304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.431351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.320 [2024-12-06 17:18:42.431390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
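Every completion in this stretch of the trace carries the same status, ASYMMETRIC ACCESS INACCESSIBLE (03/02): status code type 0x3 (Path Related Status) with status code 0x2 (Asymmetric Access Inaccessible), which is what the host reports for I/O queued to a path whose ANA state went inaccessible. A quick way to gauge how much I/O landed in that window when skimming a try.txt like this one:

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt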
00:19:21.320 [2024-12-06 17:18:42.431590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.431960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.431976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.320 [2024-12-06 17:18:42.432323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.320 [2024-12-06 17:18:42.432339] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:19:21.320 [2024-12-06 17:18:42.432361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.320 [2024-12-06 17:18:42.432377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
[... the same command/completion notice pair repeats for several hundred further queued I/Os on qid:1: nvme_qpair.c: 243:nvme_io_qpair_print_command (READ with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, WRITE with SGL DATA BLOCK OFFSET 0x0 len:0x1000; all sqid:1 nsid:1 len:8, lba 70424-71440) each followed by nvme_qpair.c: 474:spdk_nvme_print_completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), wallclock 17:18:42.432-17:18:42.454 ...]
00:19:21.326 [2024-12-06 17:18:42.454660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.454960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.454981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 
nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.455950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.455980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:19:21.326 [2024-12-06 17:18:42.456198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.326 [2024-12-06 17:18:42.456393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:21.326 [2024-12-06 17:18:42.456423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.327 [2024-12-06 17:18:42.456445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.327 [2024-12-06 17:18:42.456497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.456949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.456980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.457001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.457032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.457053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.457083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.457104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.457134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.457155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.457186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.457208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:21.327 [2024-12-06 17:18:42.458935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.458966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.458987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.327 [2024-12-06 17:18:42.459363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.327 [2024-12-06 17:18:42.459384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:43 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.328 [2024-12-06 17:18:42.459797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.459860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.459915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.459966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.459996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:19:21.328 [2024-12-06 17:18:42.460552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.460967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.460989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:21.328 [2024-12-06 17:18:42.461473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.328 [2024-12-06 17:18:42.461494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.461524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.461546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.461576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.461598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.461628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.461653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.461683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.461705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.461736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.461758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.462776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.462815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.462853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.462896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.462929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.462951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.462981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 
[2024-12-06 17:18:42.463158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71184 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.463952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464219] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.329 [2024-12-06 17:18:42.464572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.329 [2024-12-06 17:18:42.464602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.464654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.464705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 
17:18:42.464757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.464808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.464872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.464926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.464977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.464998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 
cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.330 [2024-12-06 17:18:42.465607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465815] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.465948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.465970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.466000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.466021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.466051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.466072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.466102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.466124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.466154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.466189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.466222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.466245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 
17:18:42.467242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.330 [2024-12-06 17:18:42.467489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.330 [2024-12-06 17:18:42.467510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70616 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.467980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.467996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.331 [2024-12-06 17:18:42.468278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:70744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:19:21.331 [2024-12-06 17:18:42.468415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:21.331 [2024-12-06 17:18:42.468982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.331 [2024-12-06 17:18:42.468997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:21.332 [2024-12-06 17:18:42.469559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.469618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.469634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:71104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:71112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:71120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:71152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:71160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.470976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.470998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.332 [2024-12-06 17:18:42.471014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:21.332 [2024-12-06 17:18:42.471035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:71224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:71232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:71240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:71256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:19:21.333 [2024-12-06 17:18:42.471437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:71272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:71280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:71344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:71360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.471970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.471985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:71392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:71408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:71424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.333 [2024-12-06 17:18:42.472421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.333 [2024-12-06 17:18:42.472458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.333 [2024-12-06 17:18:42.472495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.333 [2024-12-06 17:18:42.472532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:21.333 [2024-12-06 17:18:42.472554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:21.334 [2024-12-06 17:18:42.472569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:19:21.334 [2024-12-06 17:18:42.472591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.334 [2024-12-06 17:18:42.472606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... 2024-12-06 17:18:42.472636 through 17:18:42.490001: a long run of further *NOTICE* command/completion pairs of the same form, READ and WRITE sqid:1 nsid:1 over lba 70424-71440, every command completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing from 0015 through 007f and wrapping to 0057 ...]
00:19:21.339 [2024-12-06 17:18:42.490029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:21.339 [2024-12-06 17:18:42.490045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:19:21.339 [2024-12-06 17:18:42.490225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:21.339 [2024-12-06 17:18:42.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:18:42.490045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:18:42.490225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:71008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:18:42.490250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:19:21.339 8742.19 IOPS, 34.15 MiB/s [2024-12-06T17:19:03.637Z] 8227.94 IOPS, 32.14 MiB/s [2024-12-06T17:19:03.637Z] 7770.83 IOPS, 30.35 MiB/s [2024-12-06T17:19:03.637Z] 7361.84 IOPS, 28.76 MiB/s [2024-12-06T17:19:03.637Z] 7136.20 IOPS, 27.88 MiB/s [2024-12-06T17:19:03.637Z] 7219.43 IOPS, 28.20 MiB/s [2024-12-06T17:19:03.637Z] 7298.95 IOPS, 28.51 MiB/s [2024-12-06T17:19:03.637Z] 7381.65 IOPS, 28.83 MiB/s [2024-12-06T17:19:03.637Z] 7562.67 IOPS, 29.54 MiB/s [2024-12-06T17:19:03.637Z] 7743.00 IOPS, 30.25 MiB/s [2024-12-06T17:19:03.637Z] 7910.62 IOPS, 30.90 MiB/s [2024-12-06T17:19:03.637Z] 7995.33 IOPS, 31.23 MiB/s [2024-12-06T17:19:03.637Z] 8032.21 IOPS, 31.38 MiB/s [2024-12-06T17:19:03.637Z] 8064.52 IOPS, 31.50 MiB/s [2024-12-06T17:19:03.637Z] 8097.13 IOPS, 31.63 MiB/s [2024-12-06T17:19:03.637Z] 8206.84 IOPS, 32.06 MiB/s [2024-12-06T17:19:03.637Z] 8315.72 IOPS, 32.48 MiB/s [2024-12-06T17:19:03.637Z] 8427.94 IOPS, 32.92 MiB/s [2024-12-06T17:19:03.637Z] [2024-12-06 17:19:00.390100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:49528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:49544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:49560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:48928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:48992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.339 [2024-12-06 17:19:00.390872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:49600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.390967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.390982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.391003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.391018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.391040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.391067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.391091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.391107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.391129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.391145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:21.339 [2024-12-06 17:19:00.392185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.392215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 
00:19:21.339 [2024-12-06 17:19:00.392244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.339 [2024-12-06 17:19:00.392277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:48936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:48968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:49032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.392962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.392985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.340 [2024-12-06 17:19:00.393355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:21.340 [2024-12-06 17:19:00.393429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:49808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.340 [2024-12-06 17:19:00.393782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:19:21.340 [2024-12-06 17:19:00.393803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.341 [2024-12-06 17:19:00.393819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:19:21.341 [2024-12-06 17:19:00.393840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.341 [2024-12-06 17:19:00.393856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:19:21.341 [2024-12-06 17:19:00.393878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.341 [2024-12-06 17:19:00.393893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:19:21.341 [2024-12-06 17:19:00.393915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.341 [2024-12-06 17:19:00.393931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:19:21.341 8487.71 IOPS, 33.16 MiB/s [2024-12-06T17:19:03.639Z] 8502.97 IOPS, 33.21 MiB/s [2024-12-06T17:19:03.639Z] 8515.19 IOPS, 33.26 MiB/s [2024-12-06T17:19:03.639Z] Received shutdown signal, test time was about 36.520139 seconds 00:19:21.341 00:19:21.341 Latency(us) 00:19:21.341 [2024-12-06T17:19:03.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:21.341 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:21.341 Verification LBA range: start 0x0 length 0x4000 00:19:21.341 Nvme0n1 : 36.52 8517.96 33.27 0.00 0.00 14997.86 146.15 4087539.90 00:19:21.341 [2024-12-06T17:19:03.639Z] =================================================================================================================== 00:19:21.341 [2024-12-06T17:19:03.639Z] Total : 8517.96 33.27 0.00 0.00 14997.86 146.15 4087539.90 00:19:21.341 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:21.599 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:21.599 rmmod nvme_tcp 00:19:21.599 rmmod nvme_fabrics 00:19:21.599 rmmod nvme_keyring 
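Stripped of the xtrace prefixes, the teardown just traced is a short shell sequence. The following is a sketch of the same steps, not the literal multipath_status.sh / nvmf/common.sh source; the break-on-success retry handling is an assumption:

    # Tear down the target subsystem, then unload the kernel initiator modules.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    trap - SIGINT SIGTERM EXIT        # drop the error traps installed at test start
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    sync                              # flush dirty pages before removing the transport
    set +e                            # unload can fail while references drain, so retry
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # -v prints the rmmod lines seen above
    done
    modprobe -v -r nvme-fabrics
    set -e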
00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 89548 ']' 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 89548 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89548 ']' 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89548 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89548 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.858 killing process with pid 89548 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89548' 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89548 00:19:21.858 17:19:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89548 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:21.858 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:22.117 17:19:04 
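The pid-89548 shutdown traced above is autotest_common.sh's killprocess helper at work: verify the pid, check what it is, announce, kill, reap. A minimal sketch of those checks follows; the real helper has more branches, for instance around sudo-wrapped processes:

    # killprocess, reduced to the path the trace exercises.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1          # the "'[' -z 89548 ']'" guard in the trace
        kill -0 "$pid"                     # fail fast if the process is already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace
            [ "$process_name" != sudo ] || return 1           # never signal the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                # reap so the exit status is collected
    }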
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:19:22.117 00:19:22.117 real 0m41.881s 00:19:22.117 user 2m19.126s 00:19:22.117 sys 0m9.609s 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:22.117 ************************************ 00:19:22.117 END TEST nvmf_host_multipath_status 00:19:22.117 ************************************ 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:22.117 ************************************ 00:19:22.117 START TEST nvmf_discovery_remove_ifc 00:19:22.117 ************************************ 00:19:22.117 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:19:22.376 * Looking for test storage... 
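The END TEST / START TEST banners and the real/user/sys block are produced by the run_test wrapper in autotest_common.sh. A sketch consistent with what the log shows, with the xtrace enable/disable plumbing omitted; this is not the wrapper's literal source:

    run_test() {
        local test_name=$1; shift
        (( $# >= 1 )) || return 1      # mirrors the "'[' 3 -le 1 ']'" argument check above
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                      # emits the real/user/sys summary printed at END TEST
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }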
00:19:22.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:22.376 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.376 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.376 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.376 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.377 --rc genhtml_branch_coverage=1 00:19:22.377 --rc genhtml_function_coverage=1 00:19:22.377 --rc genhtml_legend=1 00:19:22.377 --rc geninfo_all_blocks=1 00:19:22.377 --rc geninfo_unexecuted_blocks=1 00:19:22.377 00:19:22.377 ' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.377 --rc genhtml_branch_coverage=1 00:19:22.377 --rc genhtml_function_coverage=1 00:19:22.377 --rc genhtml_legend=1 00:19:22.377 --rc geninfo_all_blocks=1 00:19:22.377 --rc geninfo_unexecuted_blocks=1 00:19:22.377 00:19:22.377 ' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.377 --rc genhtml_branch_coverage=1 00:19:22.377 --rc genhtml_function_coverage=1 00:19:22.377 --rc genhtml_legend=1 00:19:22.377 --rc geninfo_all_blocks=1 00:19:22.377 --rc geninfo_unexecuted_blocks=1 00:19:22.377 00:19:22.377 ' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.377 --rc genhtml_branch_coverage=1 00:19:22.377 --rc genhtml_function_coverage=1 00:19:22.377 --rc genhtml_legend=1 00:19:22.377 --rc geninfo_all_blocks=1 00:19:22.377 --rc geninfo_unexecuted_blocks=1 00:19:22.377 00:19:22.377 ' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:22.377 17:19:04 
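The lcov gate just traced (lt 1.15 2 via cmp_versions) splits each version string on '.', '-' and ':' and compares field by field. A compact sketch of that logic; the real helper also validates every field through its decimal() check, which is omitted here:

    cmp_versions() {
        local IFS='.-:' op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}          # missing fields compare as 0
            (( a > b )) && { [[ $op == '>' ]]; return; }
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                            # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # succeeds, as in the trace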
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=[PATH value with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated seven times ahead of /usr/local/bin, /usr/local/sbin, /var/spdk/dependencies/pip/bin and the standard system directories]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=[the same list, re-prepended with /opt/go/1.21.1/bin]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=[the same list, re-prepended with /opt/protoc/21.7/bin]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [the exported PATH value]
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.377
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:19:22.377
17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19
-- # discovery_port=8009 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.377 17:19:04 
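One genuine defect is captured a few lines above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' with an empty operand, so test(1) prints "integer expression expected" and the branch merely falls through. Defaulting the variable before the numeric test would silence this; SPDK_TEST_EXAMPLE below is a stand-in name, since the trace does not show which variable was empty at line 33:

    # Hypothetical hardening of the line-33 guard.
    if [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ]; then   # empty/unset now compares as 0
        NVMF_APP+=(--some-extra-arg)               # placeholder action, not the real branch body
    fi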
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:22.377 Cannot find device "nvmf_init_br" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:22.377 Cannot find device "nvmf_init_br2" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:22.377 Cannot find device "nvmf_tgt_br" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.377 Cannot find device "nvmf_tgt_br2" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:22.377 Cannot find device "nvmf_init_br" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:22.377 Cannot find device "nvmf_init_br2" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:22.377 Cannot find device "nvmf_tgt_br" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:22.377 Cannot find device "nvmf_tgt_br2" 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:19:22.377 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:22.636 Cannot find device "nvmf_br" 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:22.636 Cannot find device "nvmf_init_if" 00:19:22.636 17:19:04 
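The alternating command / "Cannot find device" / true entries here are deliberate: nvmf_veth_init starts by tearing down any leftovers from a previous run and swallows the failures, so setup is idempotent on a clean machine. The pattern, sketched with the device names from the trace:

    # Best-effort cleanup: each command may fail if the device never existed.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # detach from any stale bridge
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true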
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:22.636 Cannot find device "nvmf_init_if2" 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.636 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.636 17:19:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:22.636 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.636 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:19:22.636 00:19:22.636 --- 10.0.0.3 ping statistics --- 00:19:22.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.636 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:22.636 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:22.636 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:22.636 00:19:22.636 --- 10.0.0.4 ping statistics --- 00:19:22.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.636 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
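The trace above is nvmf_veth_init building the test fabric: a network namespace for the target, veth pairs whose bridge-side peers are enslaved to nvmf_br, and iptables ACCEPT rules tagged with an SPDK_NVMF comment so teardown can strip them in one pass. Below is a condensed sketch of the same topology for a single initiator/target pair, assuming iproute2 and root; the second pair (10.0.0.2/10.0.0.4) is set up identically, and the ordering here is my own shorthand for the traced commands:

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# one veth pair per endpoint; the *_br peers get enslaved to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# tag the rule so 'iptables-save | grep -v SPDK_NVMF | iptables-restore' can
# remove it at teardown without tracking rule numbers
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3                           # initiator -> target sanity check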
00:19:22.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:22.636 00:19:22.636 --- 10.0.0.1 ping statistics --- 00:19:22.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.636 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:22.636 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:22.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:19:22.636 00:19:22.636 --- 10.0.0.2 ping statistics --- 00:19:22.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.637 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:22.637 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=91014 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 91014 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91014 ']' 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.896 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.897 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
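With connectivity verified in all four directions, NVMF_APP is rewritten so every target invocation runs inside the namespace, and nvmfappstart launches nvmf_tgt and blocks until its RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming an SPDK build tree; the polling loop is my approximation of waitforlisten, not its actual implementation:

NS=nvmf_tgt_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll until the app is up and serving RPCs on the default socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt exited prematurely" >&2; exit 1; }
  sleep 0.5
done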
00:19:22.897 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.897 17:19:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:22.897 [2024-12-06 17:19:05.014859] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:19:22.897 [2024-12-06 17:19:05.014963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.897 [2024-12-06 17:19:05.161706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.154 [2024-12-06 17:19:05.210578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.154 [2024-12-06 17:19:05.210642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.154 [2024-12-06 17:19:05.210661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.154 [2024-12-06 17:19:05.210675] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.154 [2024-12-06 17:19:05.210695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.154 [2024-12-06 17:19:05.211101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.087 [2024-12-06 17:19:06.091336] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:24.087 [2024-12-06 17:19:06.099502] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:24.087 null0 00:19:24.087 [2024-12-06 17:19:06.131435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91064 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91064 /tmp/host.sock 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 91064 ']' 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.087 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:24.087 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.088 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.088 [2024-12-06 17:19:06.205048] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:19:24.088 [2024-12-06 17:19:06.205128] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91064 ] 00:19:24.088 [2024-12-06 17:19:06.350878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.345 [2024-12-06 17:19:06.389473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
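The host side is a second SPDK app on its own RPC socket (/tmp/host.sock), started with --wait-for-rpc so bdev_nvme options can be set before the framework initializes, then pointed at the discovery service on port 8009. The flags below are taken verbatim from the trace; wrapping rpc.py in a helper is my own convenience:

HOST_SOCK=/tmp/host.sock
./build/bin/nvmf_tgt -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &
hostpid=$!
# (a waitforlisten-style poll, as sketched earlier, belongs here before the first RPC)
rpc() { ./scripts/rpc.py -s "$HOST_SOCK" "$@"; }
rpc bdev_nvme_set_options -e 1               # must precede framework init
rpc framework_start_init
# deliberately short timeouts so the later ifdown phase converges in seconds
rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
  -q nqn.2021-12.io.spdk:test \
  --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
  --fast-io-fail-timeout-sec 1 --wait-for-attach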
00:19:24.345 17:19:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.280 [2024-12-06 17:19:07.554537] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:25.280 [2024-12-06 17:19:07.554581] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:25.280 [2024-12-06 17:19:07.554601] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:25.538 [2024-12-06 17:19:07.640665] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:25.538 [2024-12-06 17:19:07.695233] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:25.538 [2024-12-06 17:19:07.696053] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xbdc110:1 started. 00:19:25.538 [2024-12-06 17:19:07.697825] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:25.538 [2024-12-06 17:19:07.697902] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:25.538 [2024-12-06 17:19:07.697934] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:25.538 [2024-12-06 17:19:07.697952] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:25.538 [2024-12-06 17:19:07.697977] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.538 [2024-12-06 17:19:07.702975] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xbdc110 was disconnected and freed. delete nvme_qpair. 
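Discovery has attached the namespace as nvme0n1; from here the script keys everything off two helpers whose xtrace appears around this point: get_bdev_list, which flattens bdev_get_bdevs into a sorted one-line list of names, and wait_for_bdev, which polls that list once per second. A reconstruction inferred from the trace, so treat it as an approximation rather than the repo's exact source:

get_bdev_list() {
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
  # loop until the bdev list matches the expected value ('' means "none left")
  while [[ "$(get_bdev_list)" != "$1" ]]; do
    sleep 1
  done
}
wait_for_bdev nvme0n1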
00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:25.538 17:19:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:26.915 17:19:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:26.915 17:19:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:27.851 17:19:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:28.792 17:19:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:29.750 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:29.750 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:29.750 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.750 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:29.750 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:29.750 17:19:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:29.750 17:19:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:29.750 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.007 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.007 17:19:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:30.942 17:19:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:30.942 [2024-12-06 17:19:13.125839] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:30.942 [2024-12-06 17:19:13.125905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.942 [2024-12-06 17:19:13.125922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.942 [2024-12-06 17:19:13.125935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.942 [2024-12-06 17:19:13.125945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.942 [2024-12-06 17:19:13.125956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.942 [2024-12-06 17:19:13.125965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.942 [2024-12-06 17:19:13.125975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.942 [2024-12-06 17:19:13.125985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.942 [2024-12-06 17:19:13.125995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:30.942 [2024-12-06 17:19:13.126005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:30.942 [2024-12-06 17:19:13.126014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e290 is same with the state(6) to be set 00:19:30.942 [2024-12-06 17:19:13.135835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e290 (9): Bad file descriptor 00:19:30.942 [2024-12-06 17:19:13.145852] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:19:30.942 [2024-12-06 17:19:13.145879] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:19:30.942 [2024-12-06 17:19:13.145903] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:30.942 [2024-12-06 17:19:13.145909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:30.942 [2024-12-06 17:19:13.145947] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:31.878 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:31.878 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:31.878 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:31.878 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.879 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:31.879 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:31.879 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:31.879 [2024-12-06 17:19:14.167396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:31.879 [2024-12-06 17:19:14.167512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1e290 with addr=10.0.0.3, port=4420 00:19:31.879 [2024-12-06 17:19:14.167547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1e290 is same with the state(6) to be set 00:19:31.879 [2024-12-06 17:19:14.167615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1e290 (9): Bad file descriptor 00:19:31.879 [2024-12-06 17:19:14.168572] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:19:31.879 [2024-12-06 17:19:14.168682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:31.879 [2024-12-06 17:19:14.168724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:31.879 [2024-12-06 17:19:14.168759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:31.879 [2024-12-06 17:19:14.168790] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:31.879 [2024-12-06 17:19:14.168808] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:19:31.879 [2024-12-06 17:19:14.168819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:19:31.879 [2024-12-06 17:19:14.168841] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:19:31.879 [2024-12-06 17:19:14.168854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:19:32.138 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.138 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:32.138 17:19:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:33.073 [2024-12-06 17:19:15.168942] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:19:33.073 [2024-12-06 17:19:15.169005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:19:33.073 [2024-12-06 17:19:15.169041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:19:33.073 [2024-12-06 17:19:15.169069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:19:33.073 [2024-12-06 17:19:15.169082] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:19:33.073 [2024-12-06 17:19:15.169092] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:19:33.073 [2024-12-06 17:19:15.169099] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:19:33.073 [2024-12-06 17:19:15.169105] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:19:33.073 [2024-12-06 17:19:15.169140] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:19:33.073 [2024-12-06 17:19:15.169199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.073 [2024-12-06 17:19:15.169215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.073 [2024-12-06 17:19:15.169231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.073 [2024-12-06 17:19:15.169240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.073 [2024-12-06 17:19:15.169251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.073 [2024-12-06 17:19:15.169261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.073 [2024-12-06 17:19:15.169286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.073 [2024-12-06 17:19:15.169315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.073 [2024-12-06 17:19:15.169345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.074 [2024-12-06 17:19:15.169355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.074 [2024-12-06 17:19:15.169366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
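The error logs above are the payoff of the earlier ifdown: the target address and interface were yanked, reconnect attempts failed against a dead file descriptor, and once --ctrlr-loss-timeout-sec expired the controller and its bdev were deleted; the trace below re-adds the address and waits for rediscovery to surface a fresh controller as nvme1n1. The whole fault-injection cycle, condensed with the helpers sketched earlier (the timing note is my reading of the flags, not from the log):

NS=nvmf_tgt_ns_spdk
ip netns exec "$NS" ip addr del 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip link set nvmf_tgt_if down
# reconnect-delay 1s inside a 2s loss timeout: roughly two failed attempts
# before bdev_nvme gives up and deletes nvme0n1
wait_for_bdev ''
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1    # rediscovery creates a new controller, hence the new name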
00:19:33.074 [2024-12-06 17:19:15.169389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb4a800 (9): Bad file descriptor 00:19:33.074 [2024-12-06 17:19:15.170017] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:33.074 [2024-12-06 17:19:15.170040] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:33.074 17:19:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:34.449 17:19:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:34.449 17:19:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:35.020 [2024-12-06 17:19:17.176363] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:35.020 [2024-12-06 17:19:17.176405] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:35.020 [2024-12-06 17:19:17.176427] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:35.020 [2024-12-06 17:19:17.262505] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:19:35.020 [2024-12-06 17:19:17.316988] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:19:35.282 [2024-12-06 17:19:17.317640] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xbb32d0:1 started. 00:19:35.282 [2024-12-06 17:19:17.318846] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:35.282 [2024-12-06 17:19:17.318895] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:35.282 [2024-12-06 17:19:17.318921] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:35.282 [2024-12-06 17:19:17.318938] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:19:35.282 [2024-12-06 17:19:17.318947] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:35.282 [2024-12-06 17:19:17.324772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xbb32d0 was disconnected and freed. delete nvme_qpair. 
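Rediscovery is confirmed once the bdev list reads nvme1n1, after which the traps are cleared and both apps are reaped via killprocess, whose xtrace follows. A reconstruction from those lines; anything beyond what the log shows (argument validation, message wording) is a guess:

killprocess() {
  local pid=$1
  [[ -n "$pid" ]] || return 1
  kill -0 "$pid" || return 1                 # is it still running?
  echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
  kill "$pid"
  wait "$pid"
}
killprocess "$hostpid"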
00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91064 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91064 ']' 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91064 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91064 00:19:35.283 killing process with pid 91064 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91064' 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91064 00:19:35.283 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91064 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:35.546 rmmod nvme_tcp 00:19:35.546 rmmod nvme_fabrics 00:19:35.546 rmmod nvme_keyring 00:19:35.546 17:19:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:35.546 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 91014 ']' 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 91014 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 91014 ']' 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 91014 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91014 00:19:35.547 killing process with pid 91014 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91014' 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 91014 00:19:35.547 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 91014 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:35.812 17:19:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:35.812 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:19:36.079 00:19:36.079 real 0m13.771s 00:19:36.079 user 0m24.086s 00:19:36.079 sys 0m1.572s 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.079 ************************************ 00:19:36.079 END TEST nvmf_discovery_remove_ifc 00:19:36.079 ************************************ 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.079 ************************************ 00:19:36.079 START TEST nvmf_identify_kernel_target 00:19:36.079 ************************************ 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:36.079 * Looking for test storage... 
00:19:36.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:36.079 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:36.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.080 --rc genhtml_branch_coverage=1 00:19:36.080 --rc genhtml_function_coverage=1 00:19:36.080 --rc genhtml_legend=1 00:19:36.080 --rc geninfo_all_blocks=1 00:19:36.080 --rc geninfo_unexecuted_blocks=1 00:19:36.080 00:19:36.080 ' 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:36.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.080 --rc genhtml_branch_coverage=1 00:19:36.080 --rc genhtml_function_coverage=1 00:19:36.080 --rc genhtml_legend=1 00:19:36.080 --rc geninfo_all_blocks=1 00:19:36.080 --rc geninfo_unexecuted_blocks=1 00:19:36.080 00:19:36.080 ' 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:36.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.080 --rc genhtml_branch_coverage=1 00:19:36.080 --rc genhtml_function_coverage=1 00:19:36.080 --rc genhtml_legend=1 00:19:36.080 --rc geninfo_all_blocks=1 00:19:36.080 --rc geninfo_unexecuted_blocks=1 00:19:36.080 00:19:36.080 ' 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:36.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.080 --rc genhtml_branch_coverage=1 00:19:36.080 --rc genhtml_function_coverage=1 00:19:36.080 --rc genhtml_legend=1 00:19:36.080 --rc geninfo_all_blocks=1 00:19:36.080 --rc geninfo_unexecuted_blocks=1 00:19:36.080 00:19:36.080 ' 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
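The next test opens by probing the installed lcov so it can pick version-appropriate flags; the lt/cmp_versions trace above splits each version string on '.', '-' or ':' and compares it field by field. A condensed sketch of that comparison (the real cmp_versions also handles >, = and other operators; this keeps only the less-than path):

version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # missing fields count as 0
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1                                   # equal is not less-than
}
version_lt 1.15 2 && echo "lcov predates 2.x; use the 1.x flag set"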
00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:36.080 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.348 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:36.348 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.348 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.348 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.348 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.349 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:36.349 17:19:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:36.349 17:19:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:19:36.349 Cannot find device "nvmf_init_br"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:19:36.349 Cannot find device "nvmf_init_br2"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:19:36.349 Cannot find device "nvmf_tgt_br"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:19:36.349 Cannot find device "nvmf_tgt_br2"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:19:36.349 Cannot find device "nvmf_init_br"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:19:36.349 Cannot find device "nvmf_init_br2"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:19:36.349 Cannot find device "nvmf_tgt_br"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:19:36.349 Cannot find device "nvmf_tgt_br2"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:19:36.349 Cannot find device "nvmf_br"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:19:36.349 Cannot find device "nvmf_init_if"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:19:36.349 Cannot find device "nvmf_init_if2"
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:36.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:36.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true
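Everything `nvmf_veth_init` is about to create is first torn down, and each expected failure ("Cannot find device ...", "Cannot open network namespace ...") is immediately followed by a `true`, so under `set -e` the function survives both a pristine host and the leftovers of a crashed run. A sketch of that idempotent-cleanup pattern, with the same device names as the trace (whether common.sh spells it `cmd || true` or an explicit `true` branch, the effect is the same):

    # Sketch: best-effort teardown so the setup that follows always starts clean.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # device may not exist yet
        ip link set "$dev" down || true
    done
    for dev in nvmf_br nvmf_init_if nvmf_init_if2; do
        ip link delete "$dev" || true
    done
    for dev in nvmf_tgt_if nvmf_tgt_if2; do
        ip netns exec nvmf_tgt_ns_spdk ip link delete "$dev" || true
    done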
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:19:36.349 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:19:36.350 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
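The block above builds the whole test network: a dedicated namespace for the target side, veth pairs whose `nvmf_*_if` ends carry the 10.0.0.1-4/24 addresses while their `nvmf_*_br` peers stay in the root namespace, and a bridge that switches all the host-side peers together. Condensed to one pair per side (names and addresses as in the trace), the topology amounts to:

    # Sketch: one initiator-side and one target-side veth pair on a common
    # bridge, with the target end hidden in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # enslave host-side peers
    ip link set nvmf_tgt_br master nvmf_br

With both host-side peers on the bridge, traffic from 10.0.0.1 in the root namespace reaches 10.0.0.3 inside the namespace, which is exactly what the pings below verify.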
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
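As the @790 expansions above show, `ipts` is a thin wrapper that replays its arguments to iptables and appends `-m comment --comment 'SPDK_NVMF:<args>'`, stamping every rule the test installs. That stamp is what makes teardown trivial later: `iptables-save | grep -v SPDK_NVMF | iptables-restore` (the `iptr` step near the end of this test) drops exactly these rules and nothing else. The pair of helpers, sketched:

    # Sketch: tag each rule on the way in, filter on the tag on the way out.
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # ... run the test ...
    iptr   # removes only the SPDK_NVMF-tagged rules, leaving the rest intact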
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:19:36.620 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:36.620 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms
00:19:36.620
00:19:36.620 --- 10.0.0.3 ping statistics ---
00:19:36.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.620 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:19:36.620 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:19:36.620 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms
00:19:36.620
00:19:36.620 --- 10.0.0.4 ping statistics ---
00:19:36.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.620 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:36.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:36.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms
00:19:36.620
00:19:36.620 --- 10.0.0.1 ping statistics ---
00:19:36.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.620 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:19:36.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:36.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms
00:19:36.620
00:19:36.620 --- 10.0.0.2 ping statistics ---
00:19:36.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:36.620 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:19:36.620 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
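Before `configure_kernel_target` starts touching configfs, note the trap registered above (host/identify_kernel_nvmf.sh@13): cleanup is attached to EXIT, so the veth topology and the kernel target are dismantled on every exit path, assertion failures included. The `|| :` no-op fallback keeps a failing `nvmftestfini` from ending the trap body early before `clean_kernel_target` gets its turn:

    # Sketch: the test's safety net, as traced above. ':' is bash's no-op,
    # so even if nvmftestfini fails, clean_kernel_target still runs.
    trap 'nvmftestfini || :; clean_kernel_target' EXIT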
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:19:36.621 17:19:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:19:36.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:36.883 Waiting for block devices as requested
00:19:36.883 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:37.142 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:19:37.142 No valid GPT data, bailing
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:19:37.142 17:19:19
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:19:37.142 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:19:37.400 No valid GPT data, bailing 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:19:37.400 No valid GPT data, bailing 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:19:37.400 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]]
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:19:37.401 No valid GPT data, bailing
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
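`configure_kernel_target` drives the in-kernel nvmet target purely through configfs: create a subsystem, back namespace 1 with the free /dev/nvme1n1 found by the scan above, open TCP port 4420 on 10.0.0.1, and link the subsystem into the port to go live. The trace shows only the values being echoed; the sketch below adds the nvmet attribute file names they plausibly land in (standard kernel nvmet configfs ABI — the model/serial attribute in particular is an assumption, since its path is not visible in the trace):

    # Sketch: export a block device over NVMe/TCP via the kernel target.
    modprobe nvmet       # as in the trace (nvmf/common.sh@670)
    modprobe nvmet-tcp   # assumption: TCP transport module, unloaded again at teardown
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # assumption: model string
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # the symlink makes it live

Once the link exists, the `nvme discover` that follows returns two records: the well-known discovery subsystem and nqn.2016-06.io.spdk:testnqn itself.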
00:19:37.401 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -a 10.0.0.1 -t tcp -s 4420
00:19:37.659
00:19:37.659 Discovery Log Number of Records 2, Generation counter 2
00:19:37.659 =====Discovery Log Entry 0======
00:19:37.659 trtype: tcp
00:19:37.659 adrfam: ipv4
00:19:37.659 subtype: current discovery subsystem
00:19:37.659 treq: not specified, sq flow control disable supported
00:19:37.659 portid: 1
00:19:37.659 trsvcid: 4420
00:19:37.659 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:19:37.659 traddr: 10.0.0.1
00:19:37.659 eflags: none
00:19:37.659 sectype: none
00:19:37.659 =====Discovery Log Entry 1======
00:19:37.659 trtype: tcp
00:19:37.659 adrfam: ipv4
00:19:37.659 subtype: nvme subsystem
00:19:37.659 treq: not specified, sq flow control disable supported
00:19:37.659 portid: 1
00:19:37.659 trsvcid: 4420
00:19:37.659 subnqn: nqn.2016-06.io.spdk:testnqn
00:19:37.659 traddr: 10.0.0.1
00:19:37.659 eflags: none
00:19:37.659 sectype: none
00:19:37.659 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:19:37.659 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:19:37.659 =====================================================
00:19:37.659 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:19:37.659 =====================================================
00:19:37.659 Controller Capabilities/Features
00:19:37.659 ================================
00:19:37.659 Vendor ID: 0000
00:19:37.659 Subsystem Vendor ID: 0000
00:19:37.659 Serial Number: af5c078c877599ec2580
00:19:37.659 Model Number: Linux
00:19:37.659 Firmware Version: 6.8.9-20
00:19:37.659 Recommended Arb Burst: 0
00:19:37.659 IEEE OUI Identifier: 00 00 00
00:19:37.659 Multi-path I/O
00:19:37.659 May have multiple subsystem ports: No
00:19:37.659 May have multiple controllers: No
00:19:37.659 Associated with SR-IOV VF: No
00:19:37.659 Max Data Transfer Size: Unlimited
00:19:37.659 Max Number of Namespaces: 0
00:19:37.659 Max Number of I/O Queues: 1024
00:19:37.659 NVMe Specification Version (VS): 1.3
00:19:37.659 NVMe Specification Version (Identify): 1.3
00:19:37.659 Maximum Queue Entries: 1024
00:19:37.659 Contiguous Queues Required: No
00:19:37.659 Arbitration Mechanisms Supported
00:19:37.659 Weighted Round Robin: Not Supported
00:19:37.659 Vendor Specific: Not Supported
00:19:37.659 Reset Timeout: 7500 ms
00:19:37.659 Doorbell Stride: 4 bytes
00:19:37.659 NVM Subsystem Reset: Not Supported
00:19:37.659 Command Sets Supported
00:19:37.659 NVM Command Set: Supported
00:19:37.659 Boot Partition: Not Supported
00:19:37.659 Memory Page Size Minimum: 4096 bytes
00:19:37.659 Memory Page Size Maximum: 4096 bytes
00:19:37.659 Persistent Memory Region: Not Supported
00:19:37.659 Optional Asynchronous Events Supported
00:19:37.659 Namespace Attribute Notices: Not Supported
00:19:37.659 Firmware Activation Notices: Not Supported
00:19:37.659 ANA Change Notices: Not Supported
00:19:37.659 PLE Aggregate Log Change Notices: Not Supported
00:19:37.659 LBA Status Info Alert Notices: Not Supported
00:19:37.659 EGE Aggregate Log Change Notices: Not Supported
00:19:37.659 Normal NVM Subsystem Shutdown event: Not Supported
00:19:37.659 Zone Descriptor Change Notices: Not Supported
00:19:37.659 Discovery Log Change Notices: Supported
00:19:37.659 Controller Attributes
00:19:37.659 128-bit Host Identifier: Not Supported
00:19:37.659 Non-Operational Permissive Mode: Not Supported
00:19:37.659 NVM Sets: Not Supported
00:19:37.659 Read Recovery Levels: Not Supported
00:19:37.659 Endurance Groups: Not Supported
00:19:37.659 Predictable Latency Mode: Not Supported
00:19:37.659 Traffic Based Keep ALive: Not Supported
00:19:37.659 Namespace Granularity: Not Supported
00:19:37.659 SQ Associations: Not Supported
00:19:37.659 UUID List: Not Supported
00:19:37.659 Multi-Domain Subsystem: Not Supported
00:19:37.659 Fixed Capacity Management: Not Supported
00:19:37.659 Variable Capacity Management: Not Supported
00:19:37.659 Delete Endurance Group: Not Supported
00:19:37.659 Delete NVM Set: Not Supported
00:19:37.659 Extended LBA Formats Supported: Not Supported
00:19:37.659 Flexible Data
Placement Supported: Not Supported 00:19:37.659 00:19:37.659 Controller Memory Buffer Support 00:19:37.659 ================================ 00:19:37.659 Supported: No 00:19:37.659 00:19:37.659 Persistent Memory Region Support 00:19:37.659 ================================ 00:19:37.659 Supported: No 00:19:37.659 00:19:37.659 Admin Command Set Attributes 00:19:37.659 ============================ 00:19:37.659 Security Send/Receive: Not Supported 00:19:37.659 Format NVM: Not Supported 00:19:37.659 Firmware Activate/Download: Not Supported 00:19:37.659 Namespace Management: Not Supported 00:19:37.659 Device Self-Test: Not Supported 00:19:37.659 Directives: Not Supported 00:19:37.659 NVMe-MI: Not Supported 00:19:37.659 Virtualization Management: Not Supported 00:19:37.659 Doorbell Buffer Config: Not Supported 00:19:37.659 Get LBA Status Capability: Not Supported 00:19:37.659 Command & Feature Lockdown Capability: Not Supported 00:19:37.659 Abort Command Limit: 1 00:19:37.659 Async Event Request Limit: 1 00:19:37.659 Number of Firmware Slots: N/A 00:19:37.659 Firmware Slot 1 Read-Only: N/A 00:19:37.659 Firmware Activation Without Reset: N/A 00:19:37.659 Multiple Update Detection Support: N/A 00:19:37.659 Firmware Update Granularity: No Information Provided 00:19:37.659 Per-Namespace SMART Log: No 00:19:37.659 Asymmetric Namespace Access Log Page: Not Supported 00:19:37.659 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:37.659 Command Effects Log Page: Not Supported 00:19:37.659 Get Log Page Extended Data: Supported 00:19:37.659 Telemetry Log Pages: Not Supported 00:19:37.659 Persistent Event Log Pages: Not Supported 00:19:37.659 Supported Log Pages Log Page: May Support 00:19:37.660 Commands Supported & Effects Log Page: Not Supported 00:19:37.660 Feature Identifiers & Effects Log Page:May Support 00:19:37.660 NVMe-MI Commands & Effects Log Page: May Support 00:19:37.660 Data Area 4 for Telemetry Log: Not Supported 00:19:37.660 Error Log Page Entries Supported: 1 00:19:37.660 Keep Alive: Not Supported 00:19:37.660 00:19:37.660 NVM Command Set Attributes 00:19:37.660 ========================== 00:19:37.660 Submission Queue Entry Size 00:19:37.660 Max: 1 00:19:37.660 Min: 1 00:19:37.660 Completion Queue Entry Size 00:19:37.660 Max: 1 00:19:37.660 Min: 1 00:19:37.660 Number of Namespaces: 0 00:19:37.660 Compare Command: Not Supported 00:19:37.660 Write Uncorrectable Command: Not Supported 00:19:37.660 Dataset Management Command: Not Supported 00:19:37.660 Write Zeroes Command: Not Supported 00:19:37.660 Set Features Save Field: Not Supported 00:19:37.660 Reservations: Not Supported 00:19:37.660 Timestamp: Not Supported 00:19:37.660 Copy: Not Supported 00:19:37.660 Volatile Write Cache: Not Present 00:19:37.660 Atomic Write Unit (Normal): 1 00:19:37.660 Atomic Write Unit (PFail): 1 00:19:37.660 Atomic Compare & Write Unit: 1 00:19:37.660 Fused Compare & Write: Not Supported 00:19:37.660 Scatter-Gather List 00:19:37.660 SGL Command Set: Supported 00:19:37.660 SGL Keyed: Not Supported 00:19:37.660 SGL Bit Bucket Descriptor: Not Supported 00:19:37.660 SGL Metadata Pointer: Not Supported 00:19:37.660 Oversized SGL: Not Supported 00:19:37.660 SGL Metadata Address: Not Supported 00:19:37.660 SGL Offset: Supported 00:19:37.660 Transport SGL Data Block: Not Supported 00:19:37.660 Replay Protected Memory Block: Not Supported 00:19:37.660 00:19:37.660 Firmware Slot Information 00:19:37.660 ========================= 00:19:37.660 Active slot: 0 00:19:37.660 00:19:37.660 00:19:37.660 Error Log 
00:19:37.660 ========= 00:19:37.660 00:19:37.660 Active Namespaces 00:19:37.660 ================= 00:19:37.660 Discovery Log Page 00:19:37.660 ================== 00:19:37.660 Generation Counter: 2 00:19:37.660 Number of Records: 2 00:19:37.660 Record Format: 0 00:19:37.660 00:19:37.660 Discovery Log Entry 0 00:19:37.660 ---------------------- 00:19:37.660 Transport Type: 3 (TCP) 00:19:37.660 Address Family: 1 (IPv4) 00:19:37.660 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:37.660 Entry Flags: 00:19:37.660 Duplicate Returned Information: 0 00:19:37.660 Explicit Persistent Connection Support for Discovery: 0 00:19:37.660 Transport Requirements: 00:19:37.660 Secure Channel: Not Specified 00:19:37.660 Port ID: 1 (0x0001) 00:19:37.660 Controller ID: 65535 (0xffff) 00:19:37.660 Admin Max SQ Size: 32 00:19:37.660 Transport Service Identifier: 4420 00:19:37.660 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:37.660 Transport Address: 10.0.0.1 00:19:37.660 Discovery Log Entry 1 00:19:37.660 ---------------------- 00:19:37.660 Transport Type: 3 (TCP) 00:19:37.660 Address Family: 1 (IPv4) 00:19:37.660 Subsystem Type: 2 (NVM Subsystem) 00:19:37.660 Entry Flags: 00:19:37.660 Duplicate Returned Information: 0 00:19:37.660 Explicit Persistent Connection Support for Discovery: 0 00:19:37.660 Transport Requirements: 00:19:37.660 Secure Channel: Not Specified 00:19:37.660 Port ID: 1 (0x0001) 00:19:37.660 Controller ID: 65535 (0xffff) 00:19:37.660 Admin Max SQ Size: 32 00:19:37.660 Transport Service Identifier: 4420 00:19:37.660 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:37.660 Transport Address: 10.0.0.1 00:19:37.660 17:19:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:37.920 get_feature(0x01) failed 00:19:37.920 get_feature(0x02) failed 00:19:37.920 get_feature(0x04) failed 00:19:37.920 ===================================================== 00:19:37.920 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:19:37.920 ===================================================== 00:19:37.920 Controller Capabilities/Features 00:19:37.920 ================================ 00:19:37.920 Vendor ID: 0000 00:19:37.920 Subsystem Vendor ID: 0000 00:19:37.920 Serial Number: f58a2e86dbcac1428b4e 00:19:37.920 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:37.920 Firmware Version: 6.8.9-20 00:19:37.920 Recommended Arb Burst: 6 00:19:37.920 IEEE OUI Identifier: 00 00 00 00:19:37.920 Multi-path I/O 00:19:37.920 May have multiple subsystem ports: Yes 00:19:37.920 May have multiple controllers: Yes 00:19:37.920 Associated with SR-IOV VF: No 00:19:37.920 Max Data Transfer Size: Unlimited 00:19:37.920 Max Number of Namespaces: 1024 00:19:37.920 Max Number of I/O Queues: 128 00:19:37.920 NVMe Specification Version (VS): 1.3 00:19:37.920 NVMe Specification Version (Identify): 1.3 00:19:37.920 Maximum Queue Entries: 1024 00:19:37.920 Contiguous Queues Required: No 00:19:37.920 Arbitration Mechanisms Supported 00:19:37.920 Weighted Round Robin: Not Supported 00:19:37.920 Vendor Specific: Not Supported 00:19:37.920 Reset Timeout: 7500 ms 00:19:37.920 Doorbell Stride: 4 bytes 00:19:37.920 NVM Subsystem Reset: Not Supported 00:19:37.920 Command Sets Supported 00:19:37.920 NVM Command Set: Supported 00:19:37.920 Boot Partition: Not Supported 00:19:37.920 Memory 
Page Size Minimum: 4096 bytes 00:19:37.920 Memory Page Size Maximum: 4096 bytes 00:19:37.920 Persistent Memory Region: Not Supported 00:19:37.920 Optional Asynchronous Events Supported 00:19:37.920 Namespace Attribute Notices: Supported 00:19:37.920 Firmware Activation Notices: Not Supported 00:19:37.920 ANA Change Notices: Supported 00:19:37.920 PLE Aggregate Log Change Notices: Not Supported 00:19:37.920 LBA Status Info Alert Notices: Not Supported 00:19:37.920 EGE Aggregate Log Change Notices: Not Supported 00:19:37.920 Normal NVM Subsystem Shutdown event: Not Supported 00:19:37.920 Zone Descriptor Change Notices: Not Supported 00:19:37.920 Discovery Log Change Notices: Not Supported 00:19:37.920 Controller Attributes 00:19:37.920 128-bit Host Identifier: Supported 00:19:37.920 Non-Operational Permissive Mode: Not Supported 00:19:37.920 NVM Sets: Not Supported 00:19:37.920 Read Recovery Levels: Not Supported 00:19:37.920 Endurance Groups: Not Supported 00:19:37.920 Predictable Latency Mode: Not Supported 00:19:37.920 Traffic Based Keep ALive: Supported 00:19:37.920 Namespace Granularity: Not Supported 00:19:37.920 SQ Associations: Not Supported 00:19:37.920 UUID List: Not Supported 00:19:37.920 Multi-Domain Subsystem: Not Supported 00:19:37.920 Fixed Capacity Management: Not Supported 00:19:37.920 Variable Capacity Management: Not Supported 00:19:37.920 Delete Endurance Group: Not Supported 00:19:37.920 Delete NVM Set: Not Supported 00:19:37.920 Extended LBA Formats Supported: Not Supported 00:19:37.920 Flexible Data Placement Supported: Not Supported 00:19:37.920 00:19:37.920 Controller Memory Buffer Support 00:19:37.920 ================================ 00:19:37.920 Supported: No 00:19:37.920 00:19:37.920 Persistent Memory Region Support 00:19:37.920 ================================ 00:19:37.920 Supported: No 00:19:37.920 00:19:37.920 Admin Command Set Attributes 00:19:37.920 ============================ 00:19:37.920 Security Send/Receive: Not Supported 00:19:37.920 Format NVM: Not Supported 00:19:37.920 Firmware Activate/Download: Not Supported 00:19:37.920 Namespace Management: Not Supported 00:19:37.920 Device Self-Test: Not Supported 00:19:37.920 Directives: Not Supported 00:19:37.920 NVMe-MI: Not Supported 00:19:37.920 Virtualization Management: Not Supported 00:19:37.920 Doorbell Buffer Config: Not Supported 00:19:37.920 Get LBA Status Capability: Not Supported 00:19:37.920 Command & Feature Lockdown Capability: Not Supported 00:19:37.920 Abort Command Limit: 4 00:19:37.920 Async Event Request Limit: 4 00:19:37.920 Number of Firmware Slots: N/A 00:19:37.920 Firmware Slot 1 Read-Only: N/A 00:19:37.920 Firmware Activation Without Reset: N/A 00:19:37.920 Multiple Update Detection Support: N/A 00:19:37.920 Firmware Update Granularity: No Information Provided 00:19:37.920 Per-Namespace SMART Log: Yes 00:19:37.920 Asymmetric Namespace Access Log Page: Supported 00:19:37.920 ANA Transition Time : 10 sec 00:19:37.920 00:19:37.920 Asymmetric Namespace Access Capabilities 00:19:37.920 ANA Optimized State : Supported 00:19:37.920 ANA Non-Optimized State : Supported 00:19:37.920 ANA Inaccessible State : Supported 00:19:37.920 ANA Persistent Loss State : Supported 00:19:37.920 ANA Change State : Supported 00:19:37.920 ANAGRPID is not changed : No 00:19:37.920 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:37.920 00:19:37.920 ANA Group Identifier Maximum : 128 00:19:37.920 Number of ANA Group Identifiers : 128 00:19:37.920 Max Number of Allowed Namespaces : 1024 00:19:37.920 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:19:37.920 Command Effects Log Page: Supported 00:19:37.920 Get Log Page Extended Data: Supported 00:19:37.920 Telemetry Log Pages: Not Supported 00:19:37.920 Persistent Event Log Pages: Not Supported 00:19:37.920 Supported Log Pages Log Page: May Support 00:19:37.920 Commands Supported & Effects Log Page: Not Supported 00:19:37.920 Feature Identifiers & Effects Log Page:May Support 00:19:37.920 NVMe-MI Commands & Effects Log Page: May Support 00:19:37.920 Data Area 4 for Telemetry Log: Not Supported 00:19:37.920 Error Log Page Entries Supported: 128 00:19:37.920 Keep Alive: Supported 00:19:37.920 Keep Alive Granularity: 1000 ms 00:19:37.920 00:19:37.920 NVM Command Set Attributes 00:19:37.920 ========================== 00:19:37.920 Submission Queue Entry Size 00:19:37.920 Max: 64 00:19:37.920 Min: 64 00:19:37.920 Completion Queue Entry Size 00:19:37.920 Max: 16 00:19:37.920 Min: 16 00:19:37.920 Number of Namespaces: 1024 00:19:37.920 Compare Command: Not Supported 00:19:37.921 Write Uncorrectable Command: Not Supported 00:19:37.921 Dataset Management Command: Supported 00:19:37.921 Write Zeroes Command: Supported 00:19:37.921 Set Features Save Field: Not Supported 00:19:37.921 Reservations: Not Supported 00:19:37.921 Timestamp: Not Supported 00:19:37.921 Copy: Not Supported 00:19:37.921 Volatile Write Cache: Present 00:19:37.921 Atomic Write Unit (Normal): 1 00:19:37.921 Atomic Write Unit (PFail): 1 00:19:37.921 Atomic Compare & Write Unit: 1 00:19:37.921 Fused Compare & Write: Not Supported 00:19:37.921 Scatter-Gather List 00:19:37.921 SGL Command Set: Supported 00:19:37.921 SGL Keyed: Not Supported 00:19:37.921 SGL Bit Bucket Descriptor: Not Supported 00:19:37.921 SGL Metadata Pointer: Not Supported 00:19:37.921 Oversized SGL: Not Supported 00:19:37.921 SGL Metadata Address: Not Supported 00:19:37.921 SGL Offset: Supported 00:19:37.921 Transport SGL Data Block: Not Supported 00:19:37.921 Replay Protected Memory Block: Not Supported 00:19:37.921 00:19:37.921 Firmware Slot Information 00:19:37.921 ========================= 00:19:37.921 Active slot: 0 00:19:37.921 00:19:37.921 Asymmetric Namespace Access 00:19:37.921 =========================== 00:19:37.921 Change Count : 0 00:19:37.921 Number of ANA Group Descriptors : 1 00:19:37.921 ANA Group Descriptor : 0 00:19:37.921 ANA Group ID : 1 00:19:37.921 Number of NSID Values : 1 00:19:37.921 Change Count : 0 00:19:37.921 ANA State : 1 00:19:37.921 Namespace Identifier : 1 00:19:37.921 00:19:37.921 Commands Supported and Effects 00:19:37.921 ============================== 00:19:37.921 Admin Commands 00:19:37.921 -------------- 00:19:37.921 Get Log Page (02h): Supported 00:19:37.921 Identify (06h): Supported 00:19:37.921 Abort (08h): Supported 00:19:37.921 Set Features (09h): Supported 00:19:37.921 Get Features (0Ah): Supported 00:19:37.921 Asynchronous Event Request (0Ch): Supported 00:19:37.921 Keep Alive (18h): Supported 00:19:37.921 I/O Commands 00:19:37.921 ------------ 00:19:37.921 Flush (00h): Supported 00:19:37.921 Write (01h): Supported LBA-Change 00:19:37.921 Read (02h): Supported 00:19:37.921 Write Zeroes (08h): Supported LBA-Change 00:19:37.921 Dataset Management (09h): Supported 00:19:37.921 00:19:37.921 Error Log 00:19:37.921 ========= 00:19:37.921 Entry: 0 00:19:37.921 Error Count: 0x3 00:19:37.921 Submission Queue Id: 0x0 00:19:37.921 Command Id: 0x5 00:19:37.921 Phase Bit: 0 00:19:37.921 Status Code: 0x2 00:19:37.921 Status Code Type: 0x0 00:19:37.921 Do Not Retry: 1 00:19:37.921 Error 
Location: 0x28 00:19:37.921 LBA: 0x0 00:19:37.921 Namespace: 0x0 00:19:37.921 Vendor Log Page: 0x0 00:19:37.921 ----------- 00:19:37.921 Entry: 1 00:19:37.921 Error Count: 0x2 00:19:37.921 Submission Queue Id: 0x0 00:19:37.921 Command Id: 0x5 00:19:37.921 Phase Bit: 0 00:19:37.921 Status Code: 0x2 00:19:37.921 Status Code Type: 0x0 00:19:37.921 Do Not Retry: 1 00:19:37.921 Error Location: 0x28 00:19:37.921 LBA: 0x0 00:19:37.921 Namespace: 0x0 00:19:37.921 Vendor Log Page: 0x0 00:19:37.921 ----------- 00:19:37.921 Entry: 2 00:19:37.921 Error Count: 0x1 00:19:37.921 Submission Queue Id: 0x0 00:19:37.921 Command Id: 0x4 00:19:37.921 Phase Bit: 0 00:19:37.921 Status Code: 0x2 00:19:37.921 Status Code Type: 0x0 00:19:37.921 Do Not Retry: 1 00:19:37.921 Error Location: 0x28 00:19:37.921 LBA: 0x0 00:19:37.921 Namespace: 0x0 00:19:37.921 Vendor Log Page: 0x0 00:19:37.921 00:19:37.921 Number of Queues 00:19:37.921 ================ 00:19:37.921 Number of I/O Submission Queues: 128 00:19:37.921 Number of I/O Completion Queues: 128 00:19:37.921 00:19:37.921 ZNS Specific Controller Data 00:19:37.921 ============================ 00:19:37.921 Zone Append Size Limit: 0 00:19:37.921 00:19:37.921 00:19:37.921 Active Namespaces 00:19:37.921 ================= 00:19:37.921 get_feature(0x05) failed 00:19:37.921 Namespace ID:1 00:19:37.921 Command Set Identifier: NVM (00h) 00:19:37.921 Deallocate: Supported 00:19:37.921 Deallocated/Unwritten Error: Not Supported 00:19:37.921 Deallocated Read Value: Unknown 00:19:37.921 Deallocate in Write Zeroes: Not Supported 00:19:37.921 Deallocated Guard Field: 0xFFFF 00:19:37.921 Flush: Supported 00:19:37.921 Reservation: Not Supported 00:19:37.921 Namespace Sharing Capabilities: Multiple Controllers 00:19:37.921 Size (in LBAs): 1310720 (5GiB) 00:19:37.921 Capacity (in LBAs): 1310720 (5GiB) 00:19:37.921 Utilization (in LBAs): 1310720 (5GiB) 00:19:37.921 UUID: 20886e41-4700-4d2a-b18d-416608721558 00:19:37.921 Thin Provisioning: Not Supported 00:19:37.921 Per-NS Atomic Units: Yes 00:19:37.921 Atomic Boundary Size (Normal): 0 00:19:37.921 Atomic Boundary Size (PFail): 0 00:19:37.921 Atomic Boundary Offset: 0 00:19:37.921 NGUID/EUI64 Never Reused: No 00:19:37.921 ANA group ID: 1 00:19:37.921 Namespace Write Protected: No 00:19:37.921 Number of LBA Formats: 1 00:19:37.921 Current LBA Format: LBA Format #00 00:19:37.921 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:19:37.921 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:37.921 rmmod nvme_tcp 00:19:37.921 rmmod nvme_fabrics 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:37.921 17:19:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:37.921 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:37.922 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:37.922 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:38.180 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:38.180 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:19:38.181 17:19:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:39.113 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:39.113 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:39.113 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:39.113 00:19:39.113 real 0m3.080s 00:19:39.113 user 0m1.146s 00:19:39.113 sys 0m1.373s 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.113 ************************************ 00:19:39.113 END TEST nvmf_identify_kernel_target 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.113 ************************************ 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.113 ************************************ 00:19:39.113 START TEST nvmf_auth_host 00:19:39.113 ************************************ 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:19:39.113 * Looking for test storage... 
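[editor's note] The clean_kernel_target teardown traced above unwinds the kernel nvmet configfs tree in strict child-before-parent order: configfs directories refuse rmdir while a port still links to the subsystem or a namespace directory remains, and the modules only unload once the tree is empty. A condensed sketch of that ordering, with the nqn and paths taken from the trace (run as root; the redirection target of the `echo 0` is inferred as the namespace enable attribute, since xtrace does not show redirections):

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet
    echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # disable the namespace first (inferred target)
    rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink port -> subsystem reference
    rmdir "$cfg/subsystems/$nqn/namespaces/1"             # then remove children...
    rmdir "$cfg/ports/1"
    rmdir "$cfg/subsystems/$nqn"                          # ...before the subsystem itself
    modprobe -r nvmet_tcp nvmet                           # modules unload only once configfs is empty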
00:19:39.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:19:39.113 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:39.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.428 --rc genhtml_branch_coverage=1 00:19:39.428 --rc genhtml_function_coverage=1 00:19:39.428 --rc genhtml_legend=1 00:19:39.428 --rc geninfo_all_blocks=1 00:19:39.428 --rc geninfo_unexecuted_blocks=1 00:19:39.428 00:19:39.428 ' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:39.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.428 --rc genhtml_branch_coverage=1 00:19:39.428 --rc genhtml_function_coverage=1 00:19:39.428 --rc genhtml_legend=1 00:19:39.428 --rc geninfo_all_blocks=1 00:19:39.428 --rc geninfo_unexecuted_blocks=1 00:19:39.428 00:19:39.428 ' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:39.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.428 --rc genhtml_branch_coverage=1 00:19:39.428 --rc genhtml_function_coverage=1 00:19:39.428 --rc genhtml_legend=1 00:19:39.428 --rc geninfo_all_blocks=1 00:19:39.428 --rc geninfo_unexecuted_blocks=1 00:19:39.428 00:19:39.428 ' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:39.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.428 --rc genhtml_branch_coverage=1 00:19:39.428 --rc genhtml_function_coverage=1 00:19:39.428 --rc genhtml_legend=1 00:19:39.428 --rc geninfo_all_blocks=1 00:19:39.428 --rc geninfo_unexecuted_blocks=1 00:19:39.428 00:19:39.428 ' 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.428 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.429 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:39.429 Cannot find device "nvmf_init_br" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:39.429 Cannot find device "nvmf_init_br2" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:39.429 Cannot find device "nvmf_tgt_br" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.429 Cannot find device "nvmf_tgt_br2" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:39.429 Cannot find device "nvmf_init_br" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:39.429 Cannot find device "nvmf_init_br2" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:39.429 Cannot find device "nvmf_tgt_br" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:39.429 Cannot find device "nvmf_tgt_br2" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:39.429 Cannot find device "nvmf_br" 00:19:39.429 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:39.430 Cannot find device "nvmf_init_if" 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:39.430 Cannot find device "nvmf_init_if2" 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.430 17:19:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.430 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:39.430 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
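[editor's note] nvmf_veth_init, traced through here, stands up the whole test network: veth pairs whose far ends are enslaved to one shared bridge, with the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace. Condensed to a single initiator/target pair (names and addresses exactly as in the trace), the plumbing is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two pairs...
    ip link set nvmf_tgt_br master nvmf_br                      # ...so 10.0.0.1 can reach 10.0.0.3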
00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:39.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:39.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:19:39.688 00:19:39.688 --- 10.0.0.3 ping statistics --- 00:19:39.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.688 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:39.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:39.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:39.688 00:19:39.688 --- 10.0.0.4 ping statistics --- 00:19:39.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.688 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:39.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:39.688 00:19:39.688 --- 10.0.0.1 ping statistics --- 00:19:39.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.688 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:39.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:39.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:19:39.688 00:19:39.688 --- 10.0.0.2 ping statistics --- 00:19:39.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.688 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=92057 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 92057 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92057 ']' 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
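[editor's note] The ACCEPT rules and the four pings above verify the fabric before the target application starts. Note that every rule is inserted through a wrapper that appends an SPDK_NVMF-tagged comment; that tag is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pass (seen at the top of this section) strip exactly these rules and nothing else. A minimal sketch of the idiom:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }   # tag every rule we add
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target
    # later, teardown removes only the tagged rules:
    iptables-save | grep -v SPDK_NVMF | iptables-restore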
00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.688 17:19:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bac424f173734547db4f16f9405983ed 00:19:39.945 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fIL 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bac424f173734547db4f16f9405983ed 0 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bac424f173734547db4f16f9405983ed 0 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bac424f173734547db4f16f9405983ed 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fIL 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fIL 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.fIL 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.203 17:19:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7a18924735948afd4a8f54d5f6468019903f4f857155143e39fae5cca243c19 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.TLv 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7a18924735948afd4a8f54d5f6468019903f4f857155143e39fae5cca243c19 3 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7a18924735948afd4a8f54d5f6468019903f4f857155143e39fae5cca243c19 3 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b7a18924735948afd4a8f54d5f6468019903f4f857155143e39fae5cca243c19 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.TLv 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.TLv 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.TLv 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6ee9eb9b1de815c6a735d436f4bdefc1f99b5447e9475e90 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4tl 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6ee9eb9b1de815c6a735d436f4bdefc1f99b5447e9475e90 0 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6ee9eb9b1de815c6a735d436f4bdefc1f99b5447e9475e90 0 
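[editor's note] gen_dhchap_key, walked through above for each key slot, reads len/2 random bytes with xxd -p and hands the hex string to an inline python snippet whose body xtrace does not show. Per the NVMe DH-HMAC-CHAP secret representation, that step base64-encodes the secret followed by its little-endian CRC-32 under a "DHHC-1:<digest>:" prefix; the sketch below reconstructs it on that assumption rather than copying the script verbatim:

    gen_key() {                       # $1 = digest id (0=null .. 3=sha512), $2 = hex length
        local digest=$1 len=$2 hex
        hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
        python3 -c 'import base64, sys, zlib
    key = bytes.fromhex(sys.argv[1])
    crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 of the secret, little endian (assumed per spec)
    print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' "$hex" "$digest"
    }
    key_file=$(mktemp -t spdk.key-null.XXX) && gen_key 0 32 > "$key_file" && chmod 0600 "$key_file"

A quick sanity check of any generated file is to base64-decode the middle field and verify that its last four bytes equal the CRC-32 of the preceding bytes.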
00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6ee9eb9b1de815c6a735d436f4bdefc1f99b5447e9475e90 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4tl 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4tl 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4tl 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a5c66db80ae40340266a58f78e14ead4d5617b149ab754a9 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ika 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a5c66db80ae40340266a58f78e14ead4d5617b149ab754a9 2 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a5c66db80ae40340266a58f78e14ead4d5617b149ab754a9 2 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a5c66db80ae40340266a58f78e14ead4d5617b149ab754a9 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ika 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ika 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ika 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.203 17:19:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=351070e95518449c78fa9adcf4417bed 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2Wi 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 351070e95518449c78fa9adcf4417bed 1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 351070e95518449c78fa9adcf4417bed 1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=351070e95518449c78fa9adcf4417bed 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:40.203 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2Wi 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2Wi 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.2Wi 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a964652eef37a26777ea0b9038ad08d0 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.2eo 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a964652eef37a26777ea0b9038ad08d0 1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a964652eef37a26777ea0b9038ad08d0 1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=a964652eef37a26777ea0b9038ad08d0 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.2eo 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.2eo 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.2eo 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5297415fdbc6cf286d0bbe90d841aa5c20fe0e2479616be2 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VKF 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5297415fdbc6cf286d0bbe90d841aa5c20fe0e2479616be2 2 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5297415fdbc6cf286d0bbe90d841aa5c20fe0e2479616be2 2 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5297415fdbc6cf286d0bbe90d841aa5c20fe0e2479616be2 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VKF 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VKF 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VKF 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:40.461 17:19:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ab49ca05a2ed248418f44024aceabf23 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.5pW 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ab49ca05a2ed248418f44024aceabf23 0 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ab49ca05a2ed248418f44024aceabf23 0 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ab49ca05a2ed248418f44024aceabf23 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.5pW 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.5pW 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.5pW 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43f470ee212436c159619a69e270af56b7557cd76aa986496c8b42f570b77c07 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.VVU 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43f470ee212436c159619a69e270af56b7557cd76aa986496c8b42f570b77c07 3 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 43f470ee212436c159619a69e270af56b7557cd76aa986496c8b42f570b77c07 3 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43f470ee212436c159619a69e270af56b7557cd76aa986496c8b42f570b77c07 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:40.461 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.VVU 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.VVU 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.VVU 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92057 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 92057 ']' 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.462 17:19:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fIL 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.TLv ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.TLv 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4tl 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ika ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.ika 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.2Wi 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.2eo ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.2eo 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VKF 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.5pW ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.5pW 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.VVU 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.026 17:19:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:19:41.026 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:19:41.284 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:41.284 Waiting for block devices as requested
00:19:41.284 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:41.284 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:19:41.850 17:19:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:19:41.850 No valid GPT data, bailing
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:19:41.850 No valid GPT data, bailing
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]]
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]]
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt
00:19:41.850 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3
00:19:42.108 No valid GPT data, bailing
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]]
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]]
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1
00:19:42.108 No valid GPT data, bailing
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]]
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:19:42.108 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -a 10.0.0.1 -t tcp -s 4420
00:19:42.109
00:19:42.109 Discovery Log Number of Records 2, Generation counter 2
00:19:42.109 =====Discovery Log Entry 0======
00:19:42.109 trtype: tcp
00:19:42.109 adrfam: ipv4
00:19:42.109 subtype: current discovery subsystem
00:19:42.109 treq: not specified, sq flow control disable supported
00:19:42.109 portid: 1
00:19:42.109 trsvcid: 4420
00:19:42.109 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:19:42.109 traddr: 10.0.0.1
00:19:42.109 eflags: none
00:19:42.109 sectype: none
00:19:42.109 =====Discovery Log Entry 1======
00:19:42.109 trtype: tcp
00:19:42.109 adrfam: ipv4
00:19:42.109 subtype: nvme subsystem
00:19:42.109 treq: not specified, sq flow control disable supported
00:19:42.109 portid: 1
00:19:42.109 trsvcid: 4420
00:19:42.109 subnqn: nqn.2024-02.io.spdk:cnode0
00:19:42.109 traddr: 10.0.0.1
00:19:42.109 eflags: none
00:19:42.109 sectype: none
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==:
00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- #
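The configure_kernel_target block above drives the kernel NVMe-oF target entirely through configfs. Because xtrace does not show redirections, the trace prints bare echo commands; each one is really a write into an attribute file. A sketch of the equivalent setup, with attribute names taken from the kernel nvmet configfs ABI (attr_serial is an assumption for the SPDK-nqn... echo; the rest mirrors the trace):

  modprobe nvmet
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  mkdir -p "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_serial"
  echo 1 > "$sub/attr_allow_any_host"         # opened up here, locked down below
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # the free namespace found above
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"            # expose the subsystem on the port
  # nvmet_auth_init then restricts access to the one host that must authenticate:
  mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$sub/attr_allow_any_host"
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"

The nvme discover output confirms the port came up: two records, the well-known discovery subsystem plus nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420.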
ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.109 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.368 nvme0n1 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.368 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.627 nvme0n1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.627 
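The host-side counterpart is two SPDK RPCs, both visible verbatim in the trace: bdev_nvme_set_options pins which DH-HMAC-CHAP digests and FFDHE groups the initiator may negotiate, and bdev_nvme_attach_controller connects using named keyring entries. Assembled as a standalone sketch:

  scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

Supplying --dhchap-ctrlr-key requests bidirectional authentication, so the controller must also prove knowledge of its secret to the host. The bare nvme0n1 lines in the trace are the attach RPC reporting the bdev it created; the [[ nvme0 == \n\v\m\e\0 ]] checks verify the controller actually exists before bdev_nvme_detach_controller tears it down for the next combination.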
17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.627 17:19:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.627 nvme0n1 00:19:42.627 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.628 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:42.887 17:19:24 
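nvmet_auth_set_key (host/auth.sh@42-51) is the target-side mirror of those host keys: for the current digest/dhgroup/keyid it writes the hash name, DH group, and DHHC-1 secrets into the allowed host's configfs entry. The redirections are again invisible to xtrace; a sketch using the kernel's dhchap attribute names from the nvmet configfs ABI ($key and $ckey are the DHHC-1 strings assigned at @45/@46 above):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"    # kernel crypto API name for the digest
  echo ffdhe2048 > "$host/dhchap_dhgroup"
  echo "$key" > "$host/dhchap_key"             # secret the host must prove
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional runs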
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.887 17:19:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.887 nvme0n1 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.887 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.888 17:19:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.888 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.146 nvme0n1 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.146 
17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.146 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
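From here on the log is the same probe stamped out per combination: the test walks every digest, DH group, and key slot, reprogramming the kernel target with nvmet_auth_set_key and reconnecting with connect_authenticate each time, which is why the remaining trace repeats with only the parameters changing. The shape of the sweep, reconstructed from the host/auth.sh trace (the array names are illustrative):

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do                       # key slots 0-4 registered earlier
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the target side
        connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach
      done
    done
  done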
00:19:43.147 nvme0n1 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.147 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:43.726 17:19:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 nvme0n1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.726 17:19:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.726 17:19:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.726 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.727 17:19:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.984 nvme0n1 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:43.984 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.985 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.243 nvme0n1 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.243 nvme0n1 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.243 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 nvme0n1 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:44.503 17:19:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.070 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.071 17:19:27 
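The nvmet_auth_set_key calls traced above only record their echo side; the destinations are configfs attributes on the kernel nvmet target. A minimal sketch of the helper's likely shape, assuming the upstream Linux nvmet auth layout (the redirect targets are not visible in this excerpt and are an assumption):

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed configfs host entry for the initiator's host NQN.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. 'hmac(sha256)'
    echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe4096
    echo "$key"          > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # keyid 4 has no ckey
}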
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.071 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.329 nvme0n1 00:19:45.329 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.329 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.329 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.329 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.329 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.330 17:19:27 
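Stripped of the xtrace noise, each connect_authenticate iteration above boils down to two RPCs against the initiator; a minimal re-run of the same sequence (the rpc.py invocation path is an assumption, the flags are verbatim from the trace):

# key1/ckey1 are keyring names registered earlier in the test, outside this excerpt.
RPC="scripts/rpc.py"
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1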
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.330 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.589 nvme0n1 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.589 17:19:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.848 nvme0n1 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.848 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.107 nvme0n1 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:46.107 17:19:28 
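The recurring get_main_ns_ip fragments resolve which address to dial by transport; reconstructed from the expansions shown in the trace (the transport variable's name is an assumption; its value in this run is tcp):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
    ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
    [[ -n $ip ]] && echo "$ip"
}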
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.107 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.366 nvme0n1 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:46.366 17:19:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.276 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.535 nvme0n1 00:19:48.535 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.535 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.535 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.535 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.535 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.535 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.794 17:19:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.051 nvme0n1 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.051 17:19:31 
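Every iteration closes with the same success check before tearing down so the next key/dhgroup combination starts clean; in plain shell:

name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                        # controller came up => authentication passed
rpc_cmd bdev_nvme_detach_controller nvme0   # drop it before the next combination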
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.051 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.052 17:19:31 
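Zooming out, the sweep this section records (ffdhe3072, ffdhe4096, now ffdhe6144, each against key IDs 0-4) is the host/auth.sh@101/@102 loop pair; schematically (the full dhgroups array is not visible in this excerpt):

for dhgroup in "${dhgroups[@]}"; do      # this stretch of the log covers ffdhe3072/4096/6144
    for keyid in "${!keys[@]}"; do       # 0..4; keyid 4 has an empty ckey
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # program the kernel target
        connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify, detach
    done
done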
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.052 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.617 nvme0n1 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:49.617 17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.617 
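
# Key strings in this run follow the DH-HMAC-CHAP secret representation
# "DHHC-1:<t>:<base64>:", where the base64 payload carries the secret plus a
# CRC-32 and <t> names the transform already applied to it: 00 = cleartext,
# 01 = SHA-256, 02 = SHA-384, 03 = SHA-512 (so key0/key1 in this log are :00:,
# key2 is :01:, key3 :02:, key4 :03:). Secrets of this shape can be generated
# with recent nvme-cli; the flag spellings below are for its gen-dhchap-key
# command and may differ between versions:
nvme gen-dhchap-key --key-length=32 --hmac=1 --nqn=nqn.2024-02.io.spdk:host0
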
17:19:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.875 nvme0n1 00:19:49.875 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.875 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.875 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.875 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.875 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.875 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.132 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.132 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
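
# Every RPC in this log is wrapped in the same @563/@10/@591 frame:
# xtrace_disable before the JSON-RPC round trip, "set +x" as tracing switches
# off, and "[[ 0 == 0 ]]" as the wrapper asserts the call's exit status
# afterwards. A loose sketch only; the real rpc_cmd in autotest_common.sh
# multiplexes a persistent rpc.py session rather than spawning one per call:
rpc_cmd() {
    local rc
    xtrace_disable
    "$rootdir/scripts/rpc.py" "$@"
    rc=$?
    xtrace_restore
    [[ $rc == 0 ]]    # replayed as [[ 0 == 0 ]] at @591
}
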
common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.133 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.392 nvme0n1 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.392 17:19:32 
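
# The @100-@104 markers frame the sweep this log is replaying: three nested
# loops provision every digest x dhgroup x key-index combination on the target
# and then authenticate a host connection with it. Reconstructed shape, with
# array contents inferred from the values visible in this section:
for digest in "${digests[@]}"; do        # sha256 here; sha384 follows below
    for dhgroup in "${dhgroups[@]}"; do  # ...ffdhe6144, ffdhe8192, then back to ffdhe2048
        for keyid in "${!keys[@]}"; do   # key indices 0-4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (@103)
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (@104)
        done
    done
done
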
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.392 17:19:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.329 nvme0n1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
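
# The @55-@65 entries that follow replay host/auth.sh's connect_authenticate.
# Reconstruction from the trace; everything here appears verbatim in the log
# except the function framing:
connect_authenticate() {
    local digest dhgroup keyid ckey
    digest=$1 dhgroup=$2 keyid=$3
    # expands to --dhchap-ctrlr-key ckeyN only when a controller key exists (@58)
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded iff the controller actually materialized (@64)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # @65: clean up for the next combination
}
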
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.329 17:19:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.897 nvme0n1 00:19:51.897 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.897 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.897 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.897 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.897 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.897 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.156 
17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.156 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.724 nvme0n1 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.724 17:19:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.289 nvme0n1 00:19:53.289 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.549 17:19:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.549 17:19:35 
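
# This is the last sha256 combination (ffdhe8192, key index 4); the @100 entry
# just after it shows the digest loop advancing to sha384. The @769-@783 lines
# being replayed here are nvmf/common.sh's get_main_ns_ip, which maps the
# transport to the *name* of the variable holding the target address and then
# dereferences it. Sketch; the transport variable's name is an assumption, as
# the trace only shows its expanded value, "tcp":
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # @778: replayed as [[ -z 10.0.0.1 ]]
    echo "${!ip}"                          # @783: 10.0.0.1
}
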
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.549 17:19:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.176 nvme0n1 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.176 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.177 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.435 nvme0n1 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.435 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.436 nvme0n1 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:54.436 
17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.436 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.696 nvme0n1 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.696 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.697 
17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.697 17:19:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.955 nvme0n1 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
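
# A note on the recurring "[[ nvme0 == \n\v\m\e\0 ]]" entries (@64): the
# backslashes are just how xtrace prints a quoted right-hand side inside [[ ]],
# i.e. a literal, non-pattern comparison against "nvme0". Each iteration's
# pass/fail check therefore boils down to:
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
# followed by bdev_nvme_detach_controller nvme0 (@65) so the next combination
# starts with no controllers attached.
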
common/autotest_common.sh@10 -- # set +x 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.955 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.956 nvme0n1 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.956 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 nvme0n1 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 
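Annotation, for readers scanning the DHHC-1 secrets echoed above: per the NVMe DH-HMAC-CHAP secret representation, the second field is a hash identifier (00 = unhashed secret, 01/02/03 = SHA-256/384/512) and the base64 payload is the secret followed by a CRC-32 trailer. A minimal sketch, not part of host/auth.sh, that pulls apart one of the keys from this run (the variable names are illustrative):

# Hypothetical helper: split a DHHC-1 secret from the trace above and
# report the decoded payload size (secret plus 4-byte CRC-32 trailer).
key='DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk:'
IFS=: read -r tag hash_id b64 _ <<< "$key"
bytes=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "format=$tag hash_id=$hash_id payload=${bytes}B"   # 36B = 32B secret + CRC
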
17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.214 17:19:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.214 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.215 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.215 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.215 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 nvme0n1 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:55.472 17:19:37 
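Annotation: the get_main_ns_ip sequence traced above (nvmf/common.sh@769-783) reduces to a small variable-indirection trick. A condensed reconstruction follows; xtrace only shows the success path (TEST_TRANSPORT=tcp, NVMF_INITIATOR_IP=10.0.0.1), so the early-return error handling here is an assumption:

# Reconstructed sketch of get_main_ns_ip from the trace above.
get_main_ns_ip() {
  local ip
  local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
  [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
  ip=${ip_candidates[$TEST_TRANSPORT]}
  [[ -z ${!ip} ]] && return 1   # ${!ip} dereferences the variable named by $ip
  echo "${!ip}"                 # prints 10.0.0.1 in this run
}
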
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:55.472 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.473 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 nvme0n1 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.731 17:19:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 nvme0n1 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.731 17:19:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.731 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:55.991 
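Annotation: the keyid=4 iteration starting here is the one where the ${ckeys[keyid]:+...} guard at host/auth.sh@58 matters, since key 4 has no controller key (ckey='' below). A short illustration of that expansion, with a truncated placeholder secret standing in for the real ckey3:

# ${var:+alt} expands to alt only when var is set and non-empty, so the
# --dhchap-ctrlr-key argument pair is appended only when a ckey exists.
ckeys=([3]='DHHC-1:00:YWI0...' [4]='')
for keyid in 3 4; do
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "keyid=$keyid -> ${#ckey[@]} extra arg(s)"   # 2 for keyid 3, 0 for keyid 4
done
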
17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
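Annotation: every iteration above follows the same host-side pattern. A condensed reconstruction of connect_authenticate from the host/auth.sh@55-65 trace; it is assembled from xtrace output, so treat it as a sketch rather than the script verbatim:

# connect_authenticate, as reconstructed from the trace: pin the host to one
# digest/dhgroup, attach with the key under test, verify the controller
# came up under the expected name, then tear it down again.
connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}
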
00:19:55.991 nvme0n1 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.991 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.992 17:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.992 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.251 nvme0n1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.251 17:19:38 
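Annotation: on the target side, the nvmet_auth_set_key calls (host/auth.sh@42-51, traced above and again below for ffdhe4096) show only bare echo statements, because xtrace does not print redirections. A rough sketch of where those values plausibly land, assuming the kernel nvmet configfs host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the paths are assumptions, not copied from the script:

# Sketch only: configfs paths/attributes are assumed, not from host/auth.sh.
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac($digest)"    > "$host/dhchap_hash"
  echo "$dhgroup"         > "$host/dhchap_dhgroup"
  echo "${keys[keyid]}"   > "$host/dhchap_key"
  [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
}
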
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.251 17:19:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.251 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.510 nvme0n1 00:19:56.510 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.510 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.510 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.510 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.510 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.510 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:56.768 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.769 17:19:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.769 nvme0n1 00:19:56.769 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.769 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.769 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.769 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.769 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.769 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:57.027 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.028 nvme0n1 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.028 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.287 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.546 nvme0n1 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.546 17:19:39 
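[editor's note] The nvmet_auth_set_key trace just above (host/auth.sh@42-51) picks a digest, DH group, and key pair, then echoes each value; xtrace elides where the echoes go. A minimal sketch of that flow, assuming the destinations are the kernel nvmet configfs attributes for the allowed host — the NVMET_HOST path below is an assumption, not something visible in this log:

  # Sketch of the target-side key provisioning traced at host/auth.sh@42-51.
  # Assumes the kernel nvmet target is driven via configfs and NVMET_HOST
  # points at the allowed-host directory (assumed; the echo targets are
  # not shown by xtrace). keys[] and ckeys[] are the script's key arrays.
  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey
      digest="$1" dhgroup="$2" keyid="$3"
      key="${keys[keyid]}" ckey="${ckeys[keyid]}"

      echo "hmac(${digest})" > "${NVMET_HOST}/dhchap_hash"   # e.g. hmac(sha384)
      echo "${dhgroup}" > "${NVMET_HOST}/dhchap_dhgroup"     # e.g. ffdhe6144
      echo "${key}" > "${NVMET_HOST}/dhchap_key"             # DHHC-1:.. host key
      # The controller key is optional; keyid 4 has none, hence [[ -z '' ]]
      # in the trace before the ctrl-key echo is skipped.
      [[ -z ${ckey} ]] || echo "${ckey}" > "${NVMET_HOST}/dhchap_ctrl_key"
  }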
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.546 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.547 17:19:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.804 nvme0n1 00:19:57.804 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.804 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.804 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.804 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.804 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.063 17:19:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.063 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.322 nvme0n1 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.322 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.889 nvme0n1 00:19:58.889 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.889 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.889 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.889 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.889 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.889 17:19:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
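[editor's note] Each connect_authenticate pass above reduces to the same short host-side sequence. A condensed sketch assembled from the commands traced at host/auth.sh@55-65 (rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; the address, port, and NQNs are the values echoed in this log):

  # One connect/verify/teardown cycle, here for sha384 / ffdhe6144 / keyid=2.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Authentication succeeded iff the controller actually came up:
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The nvme0n1 lines interleaved in the trace are the namespace appearing after each successful attach, before the controller is torn down for the next key.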
nvmet_auth_set_key sha384 ffdhe6144 3 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:58.889 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.890 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.148 nvme0n1 00:19:59.148 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.148 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.148 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.148 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.148 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.148 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.407 17:19:41 
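[editor's note] The ip_candidates block repeated before every attach (nvmf/common.sh@769-783) is the helper that resolves which address to dial: it maps the transport to an environment-variable *name* and expands it indirectly, which is why the trace shows the literal string NVMF_INITIATOR_IP before 10.0.0.1 is echoed. A sketch reconstructed from those trace lines — TEST_TRANSPORT is assumed to carry the "tcp" seen in the [[ -z tcp ]] test:

  # Reconstruction of get_main_ns_ip from the nvmf/common.sh@769-783 trace.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # @775: bail out unless the transport has a candidate variable name.
      [[ -z ${TEST_TRANSPORT} || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}       # @776: ip=NVMF_INITIATOR_IP

      # @778/@783: indirect expansion; ${!ip} yields 10.0.0.1 in this run.
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }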
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.407 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:59.408 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.408 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 nvme0n1 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.667 17:19:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 nvme0n1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.604 17:19:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.172 nvme0n1 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.172 17:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.172 17:19:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.172 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.739 nvme0n1 00:20:01.739 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.739 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.739 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.739 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.739 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.739 17:19:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:02.004 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.004 
17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.585 nvme0n1 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:02.585 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.586 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:02.586 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:02.586 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:02.586 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:02.586 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.586 17:19:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.152 nvme0n1 00:20:03.152 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.152 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.152 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.152 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.152 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.152 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:03.410 17:19:45 
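[editor's note] With the @100 for-digest marker the run moves on to sha512, and the @100-@103 markers give away the shape of the whole sweep: a triple loop over digests, DH groups, and key IDs. Reconstructed from those trace markers, with the array contents limited to what this log actually exercises:

  # Loop structure implied by the host/auth.sh@100-@103 trace markers.
  for digest in "${digests[@]}"; do        # sha384, sha512 seen in this section
      for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048/4096/6144/8192 seen here
          for keyid in "${!keys[@]}"; do   # 0 1 2 3 4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: host side
          done
      done
  done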
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.410 17:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.410 nvme0n1 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:03.410 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:03.411 17:19:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.411 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 nvme0n1 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 nvme0n1 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.670 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:03.929 17:19:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.929 nvme0n1 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.929 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.188 nvme0n1 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.188 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.447 nvme0n1 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:04.447 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.448 nvme0n1 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.448 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:04.707 
17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.707 nvme0n1 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.707 
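
    # The nvmf/common.sh@769-@783 block repeated before every attach is
    # get_main_ns_ip resolving which address to dial: an associative array maps
    # the transport to the *name* of an environment variable, which is then
    # dereferenced (the trace shows ip=NVMF_INITIATOR_IP expanding to 10.0.0.1).
    # A sketch consistent with those trace lines; the $TEST_TRANSPORT variable
    # name is an assumption, since the log only shows its expanded value, tcp.
    get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @772
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @773
      # @775: bail out if the transport or its candidate variable name is unset.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}         # @776: ip=NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1                  # @778: [[ -z 10.0.0.1 ]]
      echo "${!ip}"                                # @783: echo 10.0.0.1
    }
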
17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.707 17:19:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.966 nvme0n1 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.966 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.224 nvme0n1 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.224 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.225 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.483 nvme0n1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.483 
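
    # The DHHC-1 strings cycled through above follow the NVMe in-band
    # authentication secret representation: "DHHC-1:" + a two-digit hash hint
    # (00 = unhinted, 01/02/03 = secret sized for SHA-256/384/512) + base64 of
    # the secret concatenated with its CRC-32 + a trailing ":". That format
    # comes from the NVMe-oF auth spec, not from this trace, so treat the
    # breakdown below as a hedged sanity check on one key copied from the log.
    key='DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk:'
    b64=${key#DHHC-1:??:}                # strip the prefix and hash hint
    b64=${b64%:}                         # strip the trailing colon
    echo -n "$b64" | base64 -d | wc -c   # 36 bytes = 32-byte secret + 4-byte CRC-32
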
17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.483 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.483 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.741 nvme0n1 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:05.741 17:19:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.741 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.742 17:19:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 nvme0n1 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.999 17:19:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.999 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.257 nvme0n1 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:06.257 
17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.257 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
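The iterations traced above all follow one pattern: for each DH group and each key index, the target is loaded with the digest, DH group and DHHC-1 secret(s) (the echoes at auth.sh@48-51), the host is restricted to the same digest and DH group via bdev_nvme_set_options, and a controller is attached with the matching key pair, verified, and detached. A minimal sketch of one iteration, reconstructed from the xtrace -- only the rpc_cmd invocations appear verbatim in the log; the nvmet configfs path and the keys/ckeys context are assumptions:

    # Context assumed from the surrounding 'for keyid in "${!keys[@]}"' loop.
    digest=sha512 dhgroup=ffdhe4096 keyid=0
    key='DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk:'
    ckey='DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=:'

    # Target side (auth.sh@48-51): the echoes presumably land in the kernel
    # nvmet per-host auth attributes -- hypothetical path, shown for shape only.
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumption
    echo "hmac(${digest})" > "$host_dir/dhchap_hash"
    echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"
    echo "$key"            > "$host_dir/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"

    # Host side (auth.sh@60-61), verbatim from the trace: allow exactly one
    # digest/dhgroup combination, then attach using the named keyring keys.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" ${ckey:+--dhchap-ctrlr-key "ckey${keyid}"}

    # Verification and teardown (auth.sh@64-65): the controller must surface
    # as nvme0, then it is detached before the next keyid is tried.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

Note that --dhchap-key/--dhchap-ctrlr-key take key names (key0, ckey0, ...) registered earlier in the run, not the DHHC-1 secrets themselves.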
00:20:06.515 nvme0n1 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:06.515 17:19:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.515 17:19:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.081 nvme0n1 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.081 17:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.081 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.082 17:19:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.082 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.340 nvme0n1 00:20:07.340 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.340 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.340 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.340 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.340 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.340 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.597 17:19:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.855 nvme0n1 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.855 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.856 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.422 nvme0n1 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.422 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.680 nvme0n1 00:20:08.680 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.680 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.680 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:08.680 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.680 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.680 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmFjNDI0ZjE3MzczNDU0N2RiNGYxNmY5NDA1OTgzZWQoIlqk: 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: ]] 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjdhMTg5MjQ3MzU5NDhhZmQ0YThmNTRkNWY2NDY4MDE5OTAzZjRmODU3MTU1MTQzZTM5ZmFlNWNjYTI0M2MxOccl1f0=: 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:08.939 17:19:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.939 17:19:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.939 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.505 nvme0n1 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.505 17:19:51 
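Before every attach, get_main_ns_ip (nvmf/common.sh@769-783) resolves the address to dial: it maps the transport to the *name* of an environment variable and then dereferences that name, which is why the trace shows ip=NVMF_INITIATOR_IP followed by echo 10.0.0.1 rather than a direct assignment. A sketch of that selection logic reconstructed from the trace -- the function wrapper and the TEST_TRANSPORT guard are assumptions; the candidate table and the checks at common.sh@775/778 are visible above:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *names*, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # common.sh@775: both the transport and its candidate entry must be set.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # assumption
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # common.sh@778: the named variable itself must be non-empty...
        [[ -z ${!ip} ]] && return 1
        # common.sh@783: ...and its value is printed (10.0.0.1 in this run).
        echo "${!ip}"
    }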
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.505 17:19:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.071 nvme0n1 00:20:10.071 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.071 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.071 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.071 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.071 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.071 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.348 17:19:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 nvme0n1 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTI5NzQxNWZkYmM2Y2YyODZkMGJiZTkwZDg0MWFhNWMyMGZlMGUyNDc5NjE2YmUyOVM2Ig==: 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWI0OWNhMDVhMmVkMjQ4NDE4ZjQ0MDI0YWNlYWJmMjMSNkTi: 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.944 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.511 nvme0n1 00:20:11.511 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.511 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:11.511 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.511 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.511 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:11.511 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.769 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.769 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:11.769 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.769 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.769 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDNmNDcwZWUyMTI0MzZjMTU5NjE5YTY5ZTI3MGFmNTZiNzU1N2NkNzZhYTk4NjQ5NmM4YjQyZjU3MGI3N2MwN2MR9M0=: 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:11.770 17:19:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.770 17:19:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.353 nvme0n1 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
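Each keyid iteration above follows the same shape: program the target's expected secret with nvmet_auth_set_key, pin the host to a single digest/dhgroup pair with bdev_nvme_set_options, attach, check that the controller came up, and detach. Condensed into a standalone sketch (assumes rpc.py is on PATH and that key1/ckey1 were registered earlier in the run, outside this excerpt):

  rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # came up
  rpc.py bdev_nvme_detach_controller nvme0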
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.353 2024/12/06 17:19:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:12.353 request: 00:20:12.353 { 00:20:12.353 "method": "bdev_nvme_attach_controller", 00:20:12.353 "params": { 00:20:12.353 "name": "nvme0", 00:20:12.353 "trtype": "tcp", 00:20:12.353 "traddr": "10.0.0.1", 00:20:12.353 "adrfam": "ipv4", 00:20:12.353 "trsvcid": "4420", 00:20:12.353 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:12.353 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:12.353 "prchk_reftag": false, 00:20:12.353 "prchk_guard": false, 00:20:12.353 "hdgst": false, 00:20:12.353 "ddgst": false, 00:20:12.353 "allow_unrecognized_csi": false 00:20:12.353 } 00:20:12.353 } 00:20:12.353 Got JSON-RPC error response 00:20:12.353 GoRPCClient: error on JSON-RPC call 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.353 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.612 2024/12/06 17:19:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:12.612 request: 00:20:12.612 { 00:20:12.612 "method": "bdev_nvme_attach_controller", 00:20:12.612 "params": { 00:20:12.612 "name": "nvme0", 00:20:12.612 "trtype": "tcp", 00:20:12.612 "traddr": "10.0.0.1", 00:20:12.612 "adrfam": "ipv4", 00:20:12.612 "trsvcid": "4420", 00:20:12.612 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:12.612 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:12.612 "prchk_reftag": false, 00:20:12.612 "prchk_guard": false, 
00:20:12.612 "hdgst": false, 00:20:12.612 "ddgst": false, 00:20:12.612 "dhchap_key": "key2", 00:20:12.612 "allow_unrecognized_csi": false 00:20:12.612 } 00:20:12.612 } 00:20:12.612 Got JSON-RPC error response 00:20:12.612 GoRPCClient: error on JSON-RPC call 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.612 2024/12/06 17:19:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:20:12.612 request: 00:20:12.612 { 00:20:12.612 "method": "bdev_nvme_attach_controller", 00:20:12.612 "params": { 00:20:12.612 "name": "nvme0", 00:20:12.612 "trtype": "tcp", 00:20:12.612 "traddr": "10.0.0.1", 00:20:12.612 "adrfam": "ipv4", 00:20:12.612 "trsvcid": "4420", 00:20:12.612 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:12.612 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:12.612 "prchk_reftag": false, 00:20:12.612 "prchk_guard": false, 00:20:12.612 "hdgst": false, 00:20:12.612 "ddgst": false, 00:20:12.612 "dhchap_key": "key1", 00:20:12.612 "dhchap_ctrlr_key": "ckey2", 00:20:12.612 "allow_unrecognized_csi": false 00:20:12.612 } 00:20:12.612 } 00:20:12.612 Got JSON-RPC error response 00:20:12.612 GoRPCClient: error on JSON-RPC call 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
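The three NOT-wrapped attach attempts above are deliberate failures: no --dhchap-key at all, then the wrong key (key2), then a mismatched controller key (key1 with ckey2). Each dies at connect time with Code=-5 (Input/output error), and NOT inverts the exit status so the failure counts as a pass. Its essence, trimmed (the real autotest_common.sh helper also treats exit codes above 128 as signals, as the es checks in the trace show):

  # Trimmed sketch of the NOT helper exercised above.
  NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # succeed only if the wrapped command failed
  }
  NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
      -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0   # no key: must fail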
10.0.0.1 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.612 nvme0n1 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.612 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.870 2024/12/06 17:19:54 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:12.870 request: 00:20:12.870 { 00:20:12.870 "method": "bdev_nvme_set_keys", 00:20:12.870 "params": { 00:20:12.870 "name": "nvme0", 00:20:12.870 "dhchap_key": "key1", 00:20:12.870 "dhchap_ctrlr_key": "ckey2" 00:20:12.870 } 00:20:12.870 } 00:20:12.870 Got JSON-RPC error response 00:20:12.870 GoRPCClient: error on JSON-RPC call 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.870 17:19:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:12.870 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.870 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:12.870 17:19:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:13.800 17:19:56 
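host/auth.sh@132 through @138 above is the re-keying path: the target's expected secret rotates to keyid 2, the live controller moves onto the new pair with bdev_nvme_set_keys, and a mismatched pair is then shown to be rejected with Code=-13 (Permission denied). As a standalone sketch under the same assumptions as the earlier one:

  rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2     # matches target: ok
  NOT rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 # mismatch: Code=-13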
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmVlOWViOWIxZGU4MTVjNmE3MzVkNDM2ZjRiZGVmYzFmOTliNTQ0N2U5NDc1ZTkwovb8yQ==: 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: ]] 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YTVjNjZkYjgwYWU0MDM0MDI2NmE1OGY3OGUxNGVhZDRkNTYxN2IxNDlhYjc1NGE5EsKVIw==: 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.800 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.119 nvme0n1 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzUxMDcwZTk1NTE4NDQ5Yzc4ZmE5YWRjZjQ0MTdiZWRuOG43: 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: ]] 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YTk2NDY1MmVlZjM3YTI2Nzc3ZWEwYjkwMzhhZDA4ZDAdGMN0: 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.119 2024/12/06 17:19:56 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:20:14.119 request: 00:20:14.119 { 00:20:14.119 "method": "bdev_nvme_set_keys", 00:20:14.119 "params": { 00:20:14.119 "name": "nvme0", 00:20:14.119 "dhchap_key": "key2", 00:20:14.119 "dhchap_ctrlr_key": "ckey1" 00:20:14.119 } 00:20:14.119 } 00:20:14.119 Got JSON-RPC error response 00:20:14.119 GoRPCClient: error on JSON-RPC call 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:14.119 17:19:56 
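Both negative set_keys probes end the same way: the connection drops, and because the controller was attached with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 it is reaped about a second later (the first jq length poll returns 1, the next returns 0). The poll at @137-@138 earlier, and at @148-@149 just below, waits for that reaping; the pattern is:

  # Wait until the mis-keyed controller has been torn down.
  while (( $(rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
      sleep 1s
  done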
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:14.119 17:19:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:15.049 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:15.306 rmmod nvme_tcp 00:20:15.306 rmmod nvme_fabrics 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 92057 ']' 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 92057 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 92057 ']' 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 92057 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92057 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:15.306 killing process 
with pid 92057 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92057' 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 92057 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 92057 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:15.306 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:15.564 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:15.821 17:19:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:16.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.388 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.388 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:16.388 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.fIL /tmp/spdk.key-null.4tl /tmp/spdk.key-sha256.2Wi /tmp/spdk.key-sha384.VKF /tmp/spdk.key-sha512.VVU /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:20:16.646 17:19:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:16.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:16.905 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:16.905 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:20:16.905 00:20:16.905 real 0m37.753s 00:20:16.905 user 0m33.642s 00:20:16.905 sys 0m3.457s 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:16.905 ************************************ 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.905 END TEST nvmf_auth_host 00:20:16.905 ************************************ 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:16.905 ************************************ 00:20:16.905 START TEST nvmf_digest 00:20:16.905 
************************************ 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:20:16.905 * Looking for test storage... 00:20:16.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:20:16.905 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:17.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.164 --rc genhtml_branch_coverage=1 00:20:17.164 --rc genhtml_function_coverage=1 00:20:17.164 --rc genhtml_legend=1 00:20:17.164 --rc geninfo_all_blocks=1 00:20:17.164 --rc geninfo_unexecuted_blocks=1 00:20:17.164 00:20:17.164 ' 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:17.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.164 --rc genhtml_branch_coverage=1 00:20:17.164 --rc genhtml_function_coverage=1 00:20:17.164 --rc genhtml_legend=1 00:20:17.164 --rc geninfo_all_blocks=1 00:20:17.164 --rc geninfo_unexecuted_blocks=1 00:20:17.164 00:20:17.164 ' 00:20:17.164 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:17.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.165 --rc genhtml_branch_coverage=1 00:20:17.165 --rc genhtml_function_coverage=1 00:20:17.165 --rc genhtml_legend=1 00:20:17.165 --rc geninfo_all_blocks=1 00:20:17.165 --rc geninfo_unexecuted_blocks=1 00:20:17.165 00:20:17.165 ' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:17.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:17.165 --rc genhtml_branch_coverage=1 00:20:17.165 --rc genhtml_function_coverage=1 00:20:17.165 --rc genhtml_legend=1 00:20:17.165 --rc geninfo_all_blocks=1 00:20:17.165 --rc geninfo_unexecuted_blocks=1 00:20:17.165 00:20:17.165 ' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.165 17:19:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:17.165 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:17.165 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:17.166 Cannot find device "nvmf_init_br" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:17.166 Cannot find device "nvmf_init_br2" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:17.166 Cannot find device "nvmf_tgt_br" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:20:17.166 Cannot find device "nvmf_tgt_br2" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:17.166 Cannot find device "nvmf_init_br" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:17.166 Cannot find device "nvmf_init_br2" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:17.166 Cannot find device "nvmf_tgt_br" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:17.166 Cannot find device "nvmf_tgt_br2" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:17.166 Cannot find device "nvmf_br" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:17.166 Cannot find device "nvmf_init_if" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:17.166 Cannot find device "nvmf_init_if2" 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:17.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:17.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:17.166 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:17.472 17:19:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:17.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:17.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:20:17.472 00:20:17.472 --- 10.0.0.3 ping statistics --- 00:20:17.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.472 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:17.472 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:17.472 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:20:17.472 00:20:17.472 --- 10.0.0.4 ping statistics --- 00:20:17.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.472 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:17.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:17.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:17.472 00:20:17.472 --- 10.0.0.1 ping statistics --- 00:20:17.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.472 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:17.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:20:17.472 00:20:17.472 --- 10.0.0.2 ping statistics --- 00:20:17.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.472 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:17.472 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:17.473 ************************************ 00:20:17.473 START TEST nvmf_digest_clean 00:20:17.473 ************************************ 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
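The four pings above verify the fixture both ways: the root-namespace initiator addresses (10.0.0.1/2) reach the target veths (10.0.0.3/4) inside nvmf_tgt_ns_spdk, and the namespaced target reaches back. For reference, a minimal sketch of the topology nvmf_veth_init builds, distilled from the trace (interface names and addresses exactly as logged; the pre-clean steps and the iptables ACCEPT rules for port 4420 are omitted):

  # target lives in its own network namespace; a bridge ties all veth peers together
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target-side ends move into the netns
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator addresses stay in the root ns
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" up
  done
  ip netns exec nvmf_tgt_ns_spdk sh -c \
      'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$l" master nvmf_br                    # bridge the root-ns peer of every veth pair
  done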
00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=93730 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 93730 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93730 ']' 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:17.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.473 17:19:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:17.473 [2024-12-06 17:19:59.740098] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:20:17.473 [2024-12-06 17:19:59.740183] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.731 [2024-12-06 17:19:59.887107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.731 [2024-12-06 17:19:59.924943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.731 [2024-12-06 17:19:59.925008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.731 [2024-12-06 17:19:59.925022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.731 [2024-12-06 17:19:59.925033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.731 [2024-12-06 17:19:59.925043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
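nvmfappstart launches nvmf_tgt inside the namespace with --wait-for-rpc and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A hedged approximation of that wait loop (the real helper lives in test/common/autotest_common.sh and caps its retries rather than looping forever):

  # poll the RPC socket until the target answers; rpc_get_methods is a cheap query any SPDK app serves
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
      sleep 0.1
  done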
00:20:17.731 [2024-12-06 17:19:59.925421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.731 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.731 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:17.731 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:17.731 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:17.731 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:17.990 null0 00:20:17.990 [2024-12-06 17:20:00.128327] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.990 [2024-12-06 17:20:00.152490] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93766 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93766 /var/tmp/bperf.sock 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93766 ']' 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:20:17.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:17.990 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:17.990 [2024-12-06 17:20:00.212461] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:20:17.990 [2024-12-06 17:20:00.212548] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93766 ] 00:20:18.248 [2024-12-06 17:20:00.359253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.248 [2024-12-06 17:20:00.391854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.248 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:18.248 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:18.248 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:18.248 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:18.248 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:18.816 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:18.816 17:20:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:19.074 nvme0n1 00:20:19.074 17:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:19.074 17:20:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:19.332 Running I/O for 2 seconds... 
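Every run_bperf iteration repeats the RPC choreography traced above: bdevperf starts paused (--wait-for-rpc), the framework is released, an NVMe-oF controller is attached over TCP with --ddgst (enabling the per-PDU data digest, a CRC32C over the payload), and bdevperf.py triggers the timed workload. Condensed into one sketch with the paths exactly as logged (the wait for the socket and the post-run teardown are elided):

  BPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bperf.sock
  "$BPERF" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &   # -z: stay up as an RPC server
  # (wait for $SOCK to answer, as sketched earlier)
  "$RPC" -s "$SOCK" framework_start_init
  "$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0             # surfaces the remote namespace as bdev nvme0n1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests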
00:20:21.208 17786.00 IOPS, 69.48 MiB/s [2024-12-06T17:20:03.506Z] 17880.00 IOPS, 69.84 MiB/s 00:20:21.208 Latency(us) 00:20:21.208 [2024-12-06T17:20:03.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.208 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:20:21.208 nvme0n1 : 2.00 17903.26 69.93 0.00 0.00 7141.62 3813.00 12571.00 00:20:21.208 [2024-12-06T17:20:03.506Z] =================================================================================================================== 00:20:21.208 [2024-12-06T17:20:03.506Z] Total : 17903.26 69.93 0.00 0.00 7141.62 3813.00 12571.00 00:20:21.208 { 00:20:21.208 "results": [ 00:20:21.208 { 00:20:21.208 "job": "nvme0n1", 00:20:21.208 "core_mask": "0x2", 00:20:21.208 "workload": "randread", 00:20:21.208 "status": "finished", 00:20:21.208 "queue_depth": 128, 00:20:21.208 "io_size": 4096, 00:20:21.209 "runtime": 2.004551, 00:20:21.209 "iops": 17903.261129300277, 00:20:21.209 "mibps": 69.93461378632921, 00:20:21.209 "io_failed": 0, 00:20:21.209 "io_timeout": 0, 00:20:21.209 "avg_latency_us": 7141.618708709925, 00:20:21.209 "min_latency_us": 3813.0036363636364, 00:20:21.209 "max_latency_us": 12570.996363636363 00:20:21.209 } 00:20:21.209 ], 00:20:21.209 "core_count": 1 00:20:21.209 } 00:20:21.209 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:21.209 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:21.209 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:21.209 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:21.209 | select(.opcode=="crc32c") 00:20:21.209 | "\(.module_name) \(.executed)"' 00:20:21.209 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93766 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93766 ']' 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93766 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93766 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:21.467 killing process with pid 93766 00:20:21.467 Received shutdown signal, test time was about 2.000000 seconds 00:20:21.467 
00:20:21.467 Latency(us) 00:20:21.467 [2024-12-06T17:20:03.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.467 [2024-12-06T17:20:03.765Z] =================================================================================================================== 00:20:21.467 [2024-12-06T17:20:03.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93766' 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93766 00:20:21.467 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93766 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93843 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93843 /var/tmp/bperf.sock 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93843 ']' 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.726 17:20:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:21.726 [2024-12-06 17:20:03.979093] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:20:21.726 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:21.726 Zero copy mechanism will not be used. 
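The "zero copy threshold" notice above is expected for the 128 KiB runs: per its own message, bdevperf only takes its zero-copy submission path for I/O sizes up to 65536 bytes, so 131072-byte I/Os go through the regular buffered path. Versus the first run only the I/O size and queue depth change (reusing $BPERF from the sketch above):

  # 128 KiB randread at queue depth 16, 2-second run
  "$BPERF" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &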
00:20:21.726 [2024-12-06 17:20:03.979226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93843 ] 00:20:21.984 [2024-12-06 17:20:04.132923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.984 [2024-12-06 17:20:04.166355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:23.024 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.024 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:23.024 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:23.024 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:23.024 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:23.281 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.281 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:23.539 nvme0n1 00:20:23.539 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:23.539 17:20:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:23.796 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:23.796 Zero copy mechanism will not be used. 00:20:23.796 Running I/O for 2 seconds... 
00:20:25.662 7739.00 IOPS, 967.38 MiB/s [2024-12-06T17:20:07.960Z] 7668.00 IOPS, 958.50 MiB/s 00:20:25.662 Latency(us) 00:20:25.662 [2024-12-06T17:20:07.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.662 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:20:25.662 nvme0n1 : 2.00 7662.96 957.87 0.00 0.00 2083.93 636.74 11379.43 00:20:25.662 [2024-12-06T17:20:07.960Z] =================================================================================================================== 00:20:25.662 [2024-12-06T17:20:07.960Z] Total : 7662.96 957.87 0.00 0.00 2083.93 636.74 11379.43 00:20:25.662 { 00:20:25.662 "results": [ 00:20:25.662 { 00:20:25.662 "job": "nvme0n1", 00:20:25.662 "core_mask": "0x2", 00:20:25.662 "workload": "randread", 00:20:25.662 "status": "finished", 00:20:25.662 "queue_depth": 16, 00:20:25.662 "io_size": 131072, 00:20:25.662 "runtime": 2.003404, 00:20:25.662 "iops": 7662.957646086361, 00:20:25.662 "mibps": 957.8697057607951, 00:20:25.662 "io_failed": 0, 00:20:25.662 "io_timeout": 0, 00:20:25.662 "avg_latency_us": 2083.9287735089297, 00:20:25.662 "min_latency_us": 636.7418181818182, 00:20:25.662 "max_latency_us": 11379.432727272728 00:20:25.662 } 00:20:25.662 ], 00:20:25.662 "core_count": 1 00:20:25.662 } 00:20:25.662 17:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:25.662 17:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:25.662 17:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:25.662 17:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:25.662 | select(.opcode=="crc32c") 00:20:25.662 | "\(.module_name) \(.executed)"' 00:20:25.663 17:20:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93843 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93843 ']' 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93843 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93843 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
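After each run the harness reads the accel framework statistics back out of the bdevperf app and asserts that CRC32C operations were actually executed, and by the expected module (software here; the DSA variants of this suite expect dsa instead). The check, condensed from the trace:

  # emit "module_name executed" for the crc32c opcode; the test requires executed > 0 and module == software
  "$RPC" -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'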
00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93843' 00:20:25.921 killing process with pid 93843 00:20:25.921 Received shutdown signal, test time was about 2.000000 seconds 00:20:25.921 00:20:25.921 Latency(us) 00:20:25.921 [2024-12-06T17:20:08.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:25.921 [2024-12-06T17:20:08.219Z] =================================================================================================================== 00:20:25.921 [2024-12-06T17:20:08.219Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93843 00:20:25.921 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93843 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93928 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93928 /var/tmp/bperf.sock 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93928 ']' 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:20:26.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.180 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:26.180 [2024-12-06 17:20:08.378897] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:20:26.180 [2024-12-06 17:20:08.378982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93928 ] 00:20:26.438 [2024-12-06 17:20:08.527421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.438 [2024-12-06 17:20:08.567731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.438 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.438 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:26.438 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:26.438 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:26.438 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:26.696 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:26.696 17:20:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:27.263 nvme0n1 00:20:27.263 17:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:27.263 17:20:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:27.263 Running I/O for 2 seconds... 
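For the randwrite runs the digest work changes direction: with --ddgst the host computes a CRC32C for each outgoing H2C data PDU instead of verifying the digest on received C2H data. Both directions are serviced by the same accel framework, so the crc32c counter check after the run is unchanged. Only the workload flag differs (again reusing $BPERF):

  # 4 KiB randwrite at queue depth 128; digest generation now happens on the submit path
  "$BPERF" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &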
00:20:29.573 20850.00 IOPS, 81.45 MiB/s [2024-12-06T17:20:11.871Z] 21029.50 IOPS, 82.15 MiB/s 00:20:29.573 Latency(us) 00:20:29.573 [2024-12-06T17:20:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.573 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:29.573 nvme0n1 : 2.01 21033.88 82.16 0.00 0.00 6076.05 2651.23 12273.11 00:20:29.573 [2024-12-06T17:20:11.871Z] =================================================================================================================== 00:20:29.573 [2024-12-06T17:20:11.871Z] Total : 21033.88 82.16 0.00 0.00 6076.05 2651.23 12273.11 00:20:29.573 { 00:20:29.573 "results": [ 00:20:29.573 { 00:20:29.573 "job": "nvme0n1", 00:20:29.573 "core_mask": "0x2", 00:20:29.573 "workload": "randwrite", 00:20:29.573 "status": "finished", 00:20:29.573 "queue_depth": 128, 00:20:29.573 "io_size": 4096, 00:20:29.573 "runtime": 2.005669, 00:20:29.573 "iops": 21033.879468646122, 00:20:29.573 "mibps": 82.16359167439892, 00:20:29.573 "io_failed": 0, 00:20:29.573 "io_timeout": 0, 00:20:29.573 "avg_latency_us": 6076.051116824011, 00:20:29.573 "min_latency_us": 2651.2290909090907, 00:20:29.573 "max_latency_us": 12273.105454545455 00:20:29.573 } 00:20:29.573 ], 00:20:29.573 "core_count": 1 00:20:29.573 } 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:29.573 | select(.opcode=="crc32c") 00:20:29.573 | "\(.module_name) \(.executed)"' 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93928 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93928 ']' 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93928 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93928 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:29.573 killing process with pid 93928 00:20:29.573 Received shutdown signal, test time was about 2.000000 seconds 00:20:29.573 
00:20:29.573 Latency(us) 00:20:29.573 [2024-12-06T17:20:11.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:29.573 [2024-12-06T17:20:11.871Z] =================================================================================================================== 00:20:29.573 [2024-12-06T17:20:11.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93928' 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93928 00:20:29.573 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93928 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94007 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94007 /var/tmp/bperf.sock 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 94007 ']' 00:20:29.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.832 17:20:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:29.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:29.832 Zero copy mechanism will not be used. 00:20:29.832 [2024-12-06 17:20:12.047048] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:20:29.832 [2024-12-06 17:20:12.047139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94007 ] 00:20:30.091 [2024-12-06 17:20:12.186924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.091 [2024-12-06 17:20:12.220564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.091 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:30.091 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:20:30.091 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:20:30.091 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:20:30.091 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:20:30.658 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.658 17:20:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:30.915 nvme0n1 00:20:30.915 17:20:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:20:30.915 17:20:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:30.915 I/O size of 131072 is greater than zero copy threshold (65536). 00:20:30.915 Zero copy mechanism will not be used. 00:20:30.915 Running I/O for 2 seconds... 
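Condensed, the block above is the whole bperf driving pattern for one data point (a sketch with repo-relative paths; the log invokes the same binaries with absolute paths under /home/vagrant/spdk_repo/spdk):

  # launch bdevperf parked (-z plus --wait-for-rpc) on its own RPC socket
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  # finish init, attach the TCP target with data digest (--ddgst) on, run the workload
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst is what makes this a digest test at all: every NVMe/TCP data PDU then carries a crc32c that the accel framework must compute, which is what the accel_get_stats check is compared against afterwards.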
00:20:33.221 6702.00 IOPS, 837.75 MiB/s [2024-12-06T17:20:15.519Z] 6765.00 IOPS, 845.62 MiB/s 00:20:33.221 Latency(us) 00:20:33.221 [2024-12-06T17:20:15.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.221 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:20:33.221 nvme0n1 : 2.00 6762.02 845.25 0.00 0.00 2360.29 1630.95 6345.08 00:20:33.221 [2024-12-06T17:20:15.519Z] =================================================================================================================== 00:20:33.221 [2024-12-06T17:20:15.519Z] Total : 6762.02 845.25 0.00 0.00 2360.29 1630.95 6345.08 00:20:33.221 { 00:20:33.221 "results": [ 00:20:33.221 { 00:20:33.221 "job": "nvme0n1", 00:20:33.221 "core_mask": "0x2", 00:20:33.221 "workload": "randwrite", 00:20:33.221 "status": "finished", 00:20:33.221 "queue_depth": 16, 00:20:33.221 "io_size": 131072, 00:20:33.221 "runtime": 2.003248, 00:20:33.221 "iops": 6762.018481985256, 00:20:33.221 "mibps": 845.252310248157, 00:20:33.221 "io_failed": 0, 00:20:33.221 "io_timeout": 0, 00:20:33.221 "avg_latency_us": 2360.288181683959, 00:20:33.221 "min_latency_us": 1630.9527272727273, 00:20:33.221 "max_latency_us": 6345.076363636364 00:20:33.221 } 00:20:33.221 ], 00:20:33.221 "core_count": 1 00:20:33.221 } 00:20:33.221 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:20:33.221 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:20:33.221 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:20:33.221 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:20:33.221 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:20:33.221 | select(.opcode=="crc32c") 00:20:33.221 | "\(.module_name) \(.executed)"' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94007 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 94007 ']' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 94007 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94007 00:20:33.479 killing process with pid 94007 00:20:33.479 Received shutdown signal, test time was about 2.000000 seconds 00:20:33.479 00:20:33.479 Latency(us) 00:20:33.479 [2024-12-06T17:20:15.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:33.479 [2024-12-06T17:20:15.777Z] =================================================================================================================== 00:20:33.479 [2024-12-06T17:20:15.777Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94007' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 94007 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 94007 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93730 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93730 ']' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93730 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.479 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93730 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.736 killing process with pid 93730 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93730' 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93730 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93730 00:20:33.736 00:20:33.736 real 0m16.242s 00:20:33.736 user 0m32.310s 00:20:33.736 sys 0m4.114s 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:20:33.736 ************************************ 00:20:33.736 END TEST nvmf_digest_clean 00:20:33.736 ************************************ 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:33.736 ************************************ 00:20:33.736 START TEST nvmf_digest_error 00:20:33.736 ************************************ 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:20:33.736 17:20:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=94111 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 94111 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94111 ']' 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.736 17:20:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:33.993 [2024-12-06 17:20:16.039101] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:20:33.993 [2024-12-06 17:20:16.039191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.993 [2024-12-06 17:20:16.186412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.993 [2024-12-06 17:20:16.218752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.993 [2024-12-06 17:20:16.218808] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.993 [2024-12-06 17:20:16.218819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.993 [2024-12-06 17:20:16.218828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.993 [2024-12-06 17:20:16.218835] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
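With the target parked at --wait-for-rpc, the lines that follow reroute the crc32c opcode to the accel "error" module before initialization completes, and the later bperf setup arms that module to corrupt digests while a retry-friendly host keeps issuing I/O. A condensed sketch of the RPCs involved, as they appear in this test (repo-relative paths assumed; commands without -s go to the target's default /var/tmp/spdk.sock, mirroring the plain rpc_cmd calls in the log):

  # target side: route crc32c through the error-injection module, then
  # clear any stale injection state and arm corruption for 256 operations
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  # host side (bdevperf socket): keep per-NVMe error counters, retry without a cap
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

Each corrupted digest then surfaces on the host as a "data digest error" from nvme_tcp.c and a completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22); dnr:0 leaves the command retryable, and --bdev-retry-count -1 keeps the bdev layer resubmitting, which is why the pages that follow are a steady stream of digest-error/READ/transient-completion triplets while I/O continues to flow.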
00:20:33.993 [2024-12-06 17:20:16.219164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.252 [2024-12-06 17:20:16.347593] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.252 null0 00:20:34.252 [2024-12-06 17:20:16.423503] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.252 [2024-12-06 17:20:16.447640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94137 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94137 /var/tmp/bperf.sock 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94137 ']' 00:20:34.252 17:20:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:34.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.252 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:34.252 [2024-12-06 17:20:16.511775] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:20:34.252 [2024-12-06 17:20:16.511884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94137 ] 00:20:34.511 [2024-12-06 17:20:16.659239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.511 [2024-12-06 17:20:16.701116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.769 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.769 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:34.769 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:34.769 17:20:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:35.029 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:35.029 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.029 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:35.029 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.029 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.029 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:35.288 nvme0n1 00:20:35.288 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:35.288 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.288 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:35.288 17:20:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.288 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:35.288 17:20:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:35.546 Running I/O for 2 seconds... 00:20:35.546 [2024-12-06 17:20:17.755388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.755443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.755459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.546 [2024-12-06 17:20:17.769877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.769916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.769931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.546 [2024-12-06 17:20:17.782643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.782682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.782696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.546 [2024-12-06 17:20:17.797407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.797445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.797459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.546 [2024-12-06 17:20:17.813837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.813878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.813893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.546 [2024-12-06 17:20:17.828426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.828466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.828481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:20:35.546 [2024-12-06 17:20:17.842573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.546 [2024-12-06 17:20:17.842612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.546 [2024-12-06 17:20:17.842637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.856889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.856928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.856942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.871322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.871360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.871375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.886862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.886902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.886916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.902823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.902862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.902877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.915550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.915589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.915603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.931497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.931537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.931552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.943715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.943758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.943772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.958071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.958114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.958130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.974223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.974274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.805 [2024-12-06 17:20:17.974292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.805 [2024-12-06 17:20:17.989332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.805 [2024-12-06 17:20:17.989386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:17.989403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.004632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.004675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.004690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.017316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.017362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.017378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.035522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.035570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.035585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.050290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.050333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.050348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.062728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.062769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.062784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.077700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.077753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.077767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:35.806 [2024-12-06 17:20:18.095082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:35.806 [2024-12-06 17:20:18.095124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:35.806 [2024-12-06 17:20:18.095138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.109181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.109223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.109237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.123845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.123894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.123909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.139588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.139629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.065 [2024-12-06 17:20:18.139643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.154365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.154403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.154417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.168990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.169033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.169048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.182595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.182645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.182662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.197901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.197944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.197959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.213680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.213721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.213736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.227886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.227926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.227941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.240690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.240731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:3916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.240751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.256063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.256103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.256117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.270890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.270930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.270944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.285221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.285279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.285296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.299741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.299782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.299797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.314319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.314357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.314372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.328656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.328695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:8477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.328711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.340502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.340543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:16992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.340558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.065 [2024-12-06 17:20:18.355924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.065 [2024-12-06 17:20:18.355963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.065 [2024-12-06 17:20:18.355978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.370414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.370453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.370468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.385067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.385106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.385121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.400786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.400826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.400840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.415730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.415768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.415784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.431214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.431254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.431282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.446033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 
00:20:36.325 [2024-12-06 17:20:18.446075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.446090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.460204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.460243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.460258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.474860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.474903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.474919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.487912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.487952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.487966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.502523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.502565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.502579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.516972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.517014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.517028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.531365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.531404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.531419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.545646] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.545686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.545700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.560426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.560464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.560478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.574904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.574942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.574957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.589036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.589075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.589089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.603858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.603898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.603912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.325 [2024-12-06 17:20:18.619184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.325 [2024-12-06 17:20:18.619223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.325 [2024-12-06 17:20:18.619238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.632494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.632533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.632548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.647074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.647113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.647129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.661343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.661396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.661411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.676625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.676668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:9755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.676683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.691060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.691099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.691114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.704816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.704856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.704871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.720375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.720413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.720439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 17188.00 IOPS, 67.14 MiB/s [2024-12-06T17:20:18.883Z] [2024-12-06 17:20:18.737055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.737093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.737108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.751330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.751368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.751383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.768031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.768071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.768085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.780494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.780533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.780549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.795015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.795084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.795100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.810243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.810331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.810348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.825445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.825514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.585 [2024-12-06 17:20:18.825531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.585 [2024-12-06 17:20:18.840382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.585 [2024-12-06 17:20:18.840440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:36.586 [2024-12-06 17:20:18.840456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.586 [2024-12-06 17:20:18.855334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.586 [2024-12-06 17:20:18.855382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.586 [2024-12-06 17:20:18.855397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.586 [2024-12-06 17:20:18.870192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.586 [2024-12-06 17:20:18.870312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.586 [2024-12-06 17:20:18.870330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.845 [2024-12-06 17:20:18.886802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.845 [2024-12-06 17:20:18.886869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.845 [2024-12-06 17:20:18.886885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.845 [2024-12-06 17:20:18.902130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.902180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.902195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:18.916453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.916490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.916505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:18.932180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.932249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.932279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:18.948019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.948065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:15651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.948083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:18.963010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.963050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:6211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.963064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:18.977993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.978033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.978048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:18.993413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:18.993452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:18.993467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.008965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.009005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.009020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.023949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.023987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.024002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.039122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.039163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.039178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.053818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.053856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.053871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.067686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.067725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.067739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.083222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.083259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.083287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.098669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.098708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.098722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.112838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.112877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.112892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:36.846 [2024-12-06 17:20:19.127526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:36.846 [2024-12-06 17:20:19.127564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:36.846 [2024-12-06 17:20:19.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.105 [2024-12-06 17:20:19.142946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.105 [2024-12-06 17:20:19.142988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.105 [2024-12-06 17:20:19.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.105 [2024-12-06 17:20:19.157351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.105 
[2024-12-06 17:20:19.157392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.105 [2024-12-06 17:20:19.157406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.171702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.171742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.171756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.184352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.184391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.184406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.199440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.199512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.199527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.214855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.214894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.214908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.230201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.230240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.230254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.243227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.243279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.243295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.259890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.259932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.259946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.275503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.275549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.275565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.290037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.290075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.290089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.305873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.305911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.305925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.318685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.318720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.318735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.334380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.334421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.334436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.349495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.349534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.349548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.364841] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.364881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.364896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.379791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.379836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.379851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.106 [2024-12-06 17:20:19.395810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.106 [2024-12-06 17:20:19.395861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.106 [2024-12-06 17:20:19.395876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.408004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.408047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.408061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.422572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.422614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.422640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.436687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.436724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.436739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.452023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.452064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.452079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.466275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.466311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.466326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.481429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.481467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.481482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.495700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.495739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.495753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.509606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.365 [2024-12-06 17:20:19.509644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.365 [2024-12-06 17:20:19.509658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.365 [2024-12-06 17:20:19.524852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.524897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.524911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.539700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.539739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.539753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.551808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.551845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.551859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.568220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.568258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.568291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.584285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.584324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.584338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.598833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.598870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.598884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.611967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.612005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.612020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.625596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.625637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.625651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.641439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.641489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.641505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:37.366 [2024-12-06 17:20:19.657303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0) 00:20:37.366 [2024-12-06 17:20:19.657344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:37.366 [2024-12-06 17:20:19.657359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.624 [2024-12-06 17:20:19.670495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0)
00:20:37.624 [2024-12-06 17:20:19.670536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.624 [2024-12-06 17:20:19.670550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.624 [2024-12-06 17:20:19.686773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0)
00:20:37.624 [2024-12-06 17:20:19.686818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.624 [2024-12-06 17:20:19.686833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.624 [2024-12-06 17:20:19.702155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0)
00:20:37.624 [2024-12-06 17:20:19.702196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16885 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.624 [2024-12-06 17:20:19.702211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.624 [2024-12-06 17:20:19.714615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0)
00:20:37.624 [2024-12-06 17:20:19.714661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.624 [2024-12-06 17:20:19.714675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.624 17178.50 IOPS, 67.10 MiB/s [2024-12-06T17:20:19.922Z] [2024-12-06 17:20:19.730216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ca92d0)
00:20:37.624 [2024-12-06 17:20:19.730252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:37.624 [2024-12-06 17:20:19.730279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:20:37.624
00:20:37.624 Latency(us)
00:20:37.624 [2024-12-06T17:20:19.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:37.624 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:20:37.624 nvme0n1 : 2.00 17203.24 67.20 0.00 0.00 7432.05 3991.74 20375.74
00:20:37.624 [2024-12-06T17:20:19.922Z] ===================================================================================================================
00:20:37.624 [2024-12-06T17:20:19.922Z] Total : 17203.24 67.20 0.00 0.00 7432.05 3991.74 20375.74
00:20:37.624 {
00:20:37.624 "results": [
00:20:37.624 {
00:20:37.624 "job": "nvme0n1",
00:20:37.624 "core_mask": "0x2",
00:20:37.624 "workload": "randread",
00:20:37.624 "status": "finished",
00:20:37.624 "queue_depth": 128,
00:20:37.624 "io_size": 4096,
00:20:37.624 "runtime": 2.004564,
00:20:37.624 "iops": 17203.24220129664,
00:20:37.624 "mibps": 67.200164848815,
00:20:37.624 "io_failed": 0,
00:20:37.624 "io_timeout": 0,
00:20:37.624 "avg_latency_us": 7432.053814596597,
00:20:37.624 "min_latency_us": 3991.7381818181816,
00:20:37.624 "max_latency_us": 20375.738181818182
00:20:37.624 }
00:20:37.624 ],
00:20:37.624 "core_count": 1
00:20:37.624 }
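The block above is bdevperf's end-of-run summary: the human-readable latency table followed by the same numbers as JSON. If that JSON is captured to a file, the headline figures can be pulled out mechanically; a minimal sketch (the results.json path is hypothetical, the fields mirror those printed above):

    # hypothetical capture of the summary JSON printed by bdevperf
    jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed"' results.json
    # -> nvme0n1: 17203.24220129664 IOPS, 67.200164848815 MiB/s, 0 failed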
00:20:37.624 17:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:37.625 17:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:37.625 17:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:37.625 17:20:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:37.625 | .driver_specific
00:20:37.625 | .nvme_error
00:20:37.625 | .status_code
00:20:37.625 | .command_transient_transport_error'
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 135 > 0 ))
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94137
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94137 ']'
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94137
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94137
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:37.884 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 94137
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94137'
Received shutdown signal, test time was about 2.000000 seconds
00:20:37.884
00:20:37.884 Latency(us)
[2024-12-06T17:20:20.182Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T17:20:20.182Z] ===================================================================================================================
[2024-12-06T17:20:20.182Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94137
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94137
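Collapsed out of the xtrace above, the whole get_transient_errcount check is a single RPC-plus-jq pipeline; a sketch of the same check as a standalone command, using the socket and bdev name from this run:

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( count > 0 ))  # here the run counted 135 transient transport errors

The counter is only populated because each bperf instance is configured with bdev_nvme_set_options --nvme-error-stat, which keeps per-status-code completion statistics on the nvme bdev.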
00:20:38.143 17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94214
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94214 /var/tmp/bperf.sock
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94214 ']'
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
17:20:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:38.143 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:38.143 Zero copy mechanism will not be used.
00:20:38.143 [2024-12-06 17:20:20.343707] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
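For reference outside the harness, the launch just traced reduces to a few lines of shell; a sketch under the paths used in this run (the socket-polling loop is a simplified stand-in for the test's waitforlisten helper):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # simplified stand-in for waitforlisten: poll until the RPC socket appears
    while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done

The -z flag keeps bdevperf idle after startup so the bdev can first be attached over the RPC socket; the workload only starts once perform_tests is called.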
00:20:38.143 [2024-12-06 17:20:20.343818] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94214 ]
00:20:38.402 [2024-12-06 17:20:20.488964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:38.402 [2024-12-06 17:20:20.525245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:39.338 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:39.338 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:39.338 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:39.338 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:39.597 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:39.597 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.597 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:39.597 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.597 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:39.597 17:20:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:39.856 nvme0n1
00:20:39.856 17:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:39.856 17:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.856 17:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:39.856 17:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.856 17:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:39.856 17:20:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:20:40.117 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:40.117 Zero copy mechanism will not be used.
00:20:40.117 Running I/O for 2 seconds...
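Stripped of the xtrace noise, the whole second-run setup is a handful of RPCs plus the trigger; a sketch of the same sequence (sockets, address, and NQN exactly as traced above; rpc_cmd in digest.sh goes to the nvmf target app, assumed here to be on the default RPC socket):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # keep per-status-code NVMe error counters and retry failed I/O indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # injection off while the controller attaches cleanly (target-side accel RPC)
    $rpc accel_error_inject_error -o crc32c -t disable
    # --ddgst enables the NVMe/TCP data digest, so every payload is CRC32C-checked
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # re-enable injection, corrupting crc32c results (-o/-t/-i as traced), then start I/O
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each digest mismatch the corrupted crc32c produces is reported by the host as COMMAND TRANSIENT TRANSPORT ERROR, which is exactly the stream of completions that follows.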
00:20:40.117 [2024-12-06 17:20:22.191171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.191223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.191238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.194470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.194512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.194526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.199390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.199427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.199441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.204281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.204317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.204330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.208877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.208915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.208929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.211797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.211834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.211847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.216893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.216931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.216945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.220423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.220459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.220471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.224426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.224464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.224477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.228944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.228981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.228995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.232471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.232508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.232521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.236556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.236592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.236606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.241601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.241639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.241652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.245138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.245175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.245188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.249137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.249175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.249188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.253124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.253162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.253175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.256896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.256932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.256945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.261282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.261317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.117 [2024-12-06 17:20:22.261331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.117 [2024-12-06 17:20:22.265075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.117 [2024-12-06 17:20:22.265112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.118 [2024-12-06 17:20:22.265125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.118 [2024-12-06 17:20:22.268706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.118 [2024-12-06 17:20:22.268743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.118 [2024-12-06 17:20:22.268756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.118 [2024-12-06 17:20:22.272975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.118 [2024-12-06 17:20:22.273011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.118 [2024-12-06 17:20:22.273025] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:40.118 [2024-12-06 17:20:22.276887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0)
00:20:40.118 [2024-12-06 17:20:22.276923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:40.118 [2024-12-06 17:20:22.276936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... the same three-line pattern repeats for every remaining READ in this burst, 2024-12-06 17:20:22.280 through 17:20:22.858 (elapsed 00:20:40.118-00:20:40.645): a data digest error on tqpair=(0x1e9add0), the failed READ command (sqid:1, nsid:1, len:32, cid 0-12, varying lba), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:20:40.645 [2024-12-06 17:20:22.861590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0)
00:20:40.645 [2024-12-06 17:20:22.861626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.
17:20:22.861639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.865585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.865622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.865635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.869224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.869261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.869288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.873084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.873121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.873134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.876948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.876984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.876998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.881220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.881257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.884371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.884408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.884422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.888505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.645 [2024-12-06 17:20:22.888541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:40.645 [2024-12-06 17:20:22.888554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.645 [2024-12-06 17:20:22.893328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.893365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.893378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.896740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.896776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.896789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.901073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.901111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.901124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.904773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.904810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.904823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.908878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.908913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.908926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.912554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.912591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.912604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.917044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.917080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.917093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.920764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.920804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.920818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.924882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.924919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.924932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.929001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.929037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.929051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.933600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.933636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.933650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.646 [2024-12-06 17:20:22.936763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.646 [2024-12-06 17:20:22.936800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.646 [2024-12-06 17:20:22.936813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.940747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.940783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.940796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.945333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.945372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.945385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.950166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.950202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.950216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.952962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.952997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.953010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.957505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.957541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.957554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.960665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.960702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.960715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.964855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.964891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.964905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.969761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.969797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.969811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.973293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 
00:20:40.908 [2024-12-06 17:20:22.973329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.973342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.977667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.977703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.977716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.982799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.982838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.982851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.987656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.987694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.987717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.992376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.992412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.992425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:22.995801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:22.995836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:22.995849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.000048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.000085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.000098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.004347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.004383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.004396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.008137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.008174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.008187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.011959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.011995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.012008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.015794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.015834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.015848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.019441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.019476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.019489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.024463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.024499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.024512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.029155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.029193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.029206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.032402] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.908 [2024-12-06 17:20:23.032438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.908 [2024-12-06 17:20:23.032451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.908 [2024-12-06 17:20:23.037235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.037292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.037308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.042112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.042151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.042164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.045435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.045471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.045485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.050247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.050297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.050312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.055284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.055331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.059946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.059986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.060000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:20:40.909 [2024-12-06 17:20:23.062836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.062871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.062884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.067908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.067948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.067962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.072696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.072735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.072749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.075577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.075617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.075630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.080232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.080291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.080307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.083989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.084026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.084040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.087600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.087638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.087651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.091926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.091963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.091976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.095108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.095143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.095157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.099576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.099612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.099625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.104008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.104046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.104059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.107358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.107397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.107410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.111695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.111732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.111745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.116734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.116772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.116785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.119858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.119896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.119909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.124340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.124375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.124388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.129074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.129110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.129123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.133122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.133172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.133185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.136760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.136796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.136809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.140678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.140713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.140726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.144339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.909 [2024-12-06 17:20:23.144375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.909 [2024-12-06 17:20:23.144388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.909 [2024-12-06 17:20:23.148328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.148364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.910 [2024-12-06 17:20:23.148378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.910 [2024-12-06 17:20:23.151907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.151943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.910 [2024-12-06 17:20:23.151956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.910 [2024-12-06 17:20:23.156221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.156257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.910 [2024-12-06 17:20:23.156288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.910 [2024-12-06 17:20:23.160615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.160652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.910 [2024-12-06 17:20:23.160666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:40.910 [2024-12-06 17:20:23.163782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.163818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.910 [2024-12-06 17:20:23.163832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:40.910 [2024-12-06 17:20:23.168721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.168758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:40.910 [2024-12-06 17:20:23.168771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:40.910 [2024-12-06 17:20:23.173386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:40.910 [2024-12-06 17:20:23.173434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
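For context on the failure logged at nvme_tcp.c:1365: NVMe/TCP carries an optional per-PDU data digest (DDGST), a CRC32C computed over the DATA field of each data PDU. The host recomputes the digest on receive and, on a mismatch, fails the command with a transport-level error instead of surfacing corrupt data. A minimal self-contained sketch of that check in C follows; it uses a plain bitwise CRC32C with the standard Castagnoli parameters rather than SPDK's accelerated helpers, and the payload and digest values are hypothetical.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reflected CRC32C (Castagnoli polynomial 0x1EDC6F41, reflected form
 * 0x82F63B78) -- the digest NVMe/TCP places in a data PDU's DDGST field. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    uint8_t pdu_data[32] = {0};         /* DATA field as received (hypothetical) */
    uint32_t ddgst_recv = 0xDEADBEEFu;  /* DDGST carried in the PDU (hypothetical) */

    /* The receiver recomputes the digest over the DATA field and compares;
     * a mismatch is what the log above reports as "data digest error". */
    if (crc32c(pdu_data, sizeof(pdu_data)) != ddgst_recv) {
        fprintf(stderr, "data digest error\n");
        return 1;
    }
    return 0;
}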
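The "(00/22)" printed with each completion is the (SCT/SC) pair: status code type 0h (generic command status) and status code 22h, Transient Transport Error, which is how the driver completes a command whose response had to be discarded after the digest mismatch; p, m, and dnr are the phase, more, and do-not-retry bits of the same 16-bit completion status halfword. A small sketch decoding that field, with the bit layout following the spec (phase in bit 0, then SC, SCT, CRD, M, DNR) and the raw value constructed for illustration:

#include <stdint.h>
#include <stdio.h>

/* Bit layout of the status+phase halfword from NVMe completion dword 3. */
struct cpl_status {
    unsigned p   : 1;  /* phase tag */
    unsigned sc  : 8;  /* status code */
    unsigned sct : 3;  /* status code type */
    unsigned crd : 2;  /* command retry delay */
    unsigned m   : 1;  /* more */
    unsigned dnr : 1;  /* do not retry */
};

int main(void)
{
    uint16_t raw = 0x22u << 1;  /* SCT 0h / SC 22h with p=0, m=0, dnr=0 */
    struct cpl_status s = {
        .p   = raw & 1u,
        .sc  = (raw >> 1) & 0xffu,
        .sct = (raw >> 9) & 0x7u,
        .crd = (raw >> 12) & 0x3u,
        .m   = (raw >> 14) & 1u,
        .dnr = (raw >> 15) & 1u,
    };

    /* Prints "(00/22) p:0 m:0 dnr:0", matching the log lines above. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}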
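As a cross-check on the throughput sample printed just below (7544.00 IOPS, 943.00 MiB/s): the two figures imply a fixed transfer size of 943 MiB/s / 7544 IOPS = 0.125 MiB = 128 KiB per command, which matches the len:32 READs above if the namespace is formatted with 4 KiB blocks (an assumption; the LBA format is not visible in this excerpt).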
00:20:40.910 7544.00 IOPS, 943.00 MiB/s [2024-12-06T17:20:23.208Z]
[... the data digest error triplets resume on qid:1, 17:20:23.187 through 17:20:23.338 ...]
00:20:41.173 [2024-12-06 17:20:23.342772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.342808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.342821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.346123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.346159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.346172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.350216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.350253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.350278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.355105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.355140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.355153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.359609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.359647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.359661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.362687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.362722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.362735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.366954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.366991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.367004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.371512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.371549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.371562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.374661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.374696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.374709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.379845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.379883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.379896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.384664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.384705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.384719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.388013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.388048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.388061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.391652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.391690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.391703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.396448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.396485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.396498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.400053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.400096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:41.173 [2024-12-06 17:20:23.400109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.403747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.403784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.403798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.407791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.407828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.407841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.411538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.411578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.411592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.415350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.415389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.415402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.419446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.419482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.419495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.423327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.423374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.423388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.426994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.427031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.427044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.430242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.430289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.430303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.434944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.173 [2024-12-06 17:20:23.434982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.173 [2024-12-06 17:20:23.434995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.173 [2024-12-06 17:20:23.439452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.439489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.439502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.174 [2024-12-06 17:20:23.442381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.442415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.442427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.174 [2024-12-06 17:20:23.447404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.447441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.447454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.174 [2024-12-06 17:20:23.451822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.451859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.451873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.174 [2024-12-06 17:20:23.454928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.454965] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.454978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.174 [2024-12-06 17:20:23.459918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.459955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.459969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.174 [2024-12-06 17:20:23.464911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.174 [2024-12-06 17:20:23.464949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.174 [2024-12-06 17:20:23.464962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.434 [2024-12-06 17:20:23.469445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.434 [2024-12-06 17:20:23.469482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.434 [2024-12-06 17:20:23.469495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.434 [2024-12-06 17:20:23.472759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.434 [2024-12-06 17:20:23.472796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.434 [2024-12-06 17:20:23.472809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.434 [2024-12-06 17:20:23.476652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.434 [2024-12-06 17:20:23.476689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.434 [2024-12-06 17:20:23.476702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.434 [2024-12-06 17:20:23.480342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.434 [2024-12-06 17:20:23.480378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.434 [2024-12-06 17:20:23.480391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.434 [2024-12-06 17:20:23.484438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 
00:20:41.434 [2024-12-06 17:20:23.484484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.434 [2024-12-06 17:20:23.484498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.434 [2024-12-06 17:20:23.487947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.434 [2024-12-06 17:20:23.487983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.434 [2024-12-06 17:20:23.487996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.492194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.492231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.492244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.495979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.496017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.496030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.500110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.500146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.500159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.503712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.503748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.503761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.507800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.507836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.507849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.512488] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.512524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.512537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.515914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.515951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.515974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.519348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.519384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.519397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.523089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.523125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.523138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.526718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.526753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.526766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.530531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.530568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.530581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.533979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.534016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.534029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:20:41.435 [2024-12-06 17:20:23.537708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.537745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.537758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.541945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.541981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.541994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.545682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.545718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.545731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.549638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.549675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.549689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.553935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.553972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.553985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.557838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.557874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.557888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.561400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.561436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.561448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.565023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.565060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.565072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.568949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.568986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.568999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.572962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.572998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.573011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.576887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.576923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.576936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.580441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.580478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.580491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.435 [2024-12-06 17:20:23.584335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.435 [2024-12-06 17:20:23.584371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.435 [2024-12-06 17:20:23.584384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.588451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.588488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.588500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.591927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.591963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.591976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.596558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.596595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.596608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.601005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.601043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.601056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.604321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.604357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.604370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.608668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.608705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.608719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.612550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.612587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.612601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.616369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.616406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:41.436 [2024-12-06 17:20:23.616419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.620655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.620691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.620705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.624188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.624224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.624237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.628297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.628333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.628346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.631996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.632032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.632045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.635933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.635970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.635995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.639713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.639749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.639762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.643814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.643850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.643864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.647924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.647960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.647973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.652186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.652223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.652236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.655491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.655528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.655541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.660346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.660382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.660395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.663909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.663957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.663970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.667061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.667097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.667110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.671658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.671696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.671709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.676097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.676134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.676147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.679215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.679250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.679278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.683359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.683395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.683408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.688384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.688419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.688433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.436 [2024-12-06 17:20:23.693113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.436 [2024-12-06 17:20:23.693151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.436 [2024-12-06 17:20:23.693164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:41.437 [2024-12-06 17:20:23.695994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.437 [2024-12-06 17:20:23.696030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:41.437 [2024-12-06 17:20:23.696043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:41.437 [2024-12-06 17:20:23.701150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0) 00:20:41.437 [2024-12-06 17:20:23.701188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.437 [2024-12-06 17:20:23.701201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:41.437 [2024-12-06 17:20:23.705962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e9add0)
00:20:41.437 [2024-12-06 17:20:23.705998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:41.437 [2024-12-06 17:20:23.706012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... roughly a hundred further triplets of the same form elided (17:20:23.708 through 17:20:24.180): nvme_tcp.c:1365 reports "data digest error on tqpair=(0x1e9add0)", the affected READ command is printed, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) dnr:0; the records differ only in timestamp, cid, lba, and sqhd ...]
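Each injected CRC failure leaves the same three-line signature on the host side: the accel sequence callback reports the digest mismatch, nvme_qpair prints the READ it belonged to, and the command is completed with status (00/22), that is, status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), and dnr:0 (Do Not Retry clear), so the initiator may resubmit the I/O. As an illustrative sketch only, a saved copy of this console output (the file name bperf.log is hypothetical) can be summarized with a pair of greps:

    # count host-side digest failures and the retryable completions they caused
    grep -c 'data digest error' bperf.log
    grep -c 'TRANSIENT TRANSPORT ERROR (00/22)' bperf.log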
00:20:41.963 7587.00 IOPS, 948.38 MiB/s
00:20:41.963 Latency(us)
00:20:41.963 [2024-12-06T17:20:24.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:41.963 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:20:41.963 nvme0n1 : 2.00 7585.86 948.23 0.00 0.00 2105.25 644.19 11319.85
00:20:41.963 [2024-12-06T17:20:24.261Z] ===================================================================================================================
00:20:41.963 [2024-12-06T17:20:24.261Z] Total : 7585.86 948.23 0.00 0.00 2105.25 644.19 11319.85
00:20:41.963 {
00:20:41.963   "results": [
00:20:41.963     {
00:20:41.963       "job": "nvme0n1",
00:20:41.963       "core_mask": "0x2",
00:20:41.963       "workload": "randread",
00:20:41.963       "status": "finished",
00:20:41.963       "queue_depth": 16,
00:20:41.963       "io_size": 131072,
00:20:41.963       "runtime": 2.002409,
00:20:41.963       "iops": 7585.862828223405,
00:20:41.963       "mibps": 948.2328535279256,
00:20:41.963       "io_failed": 0,
00:20:41.963       "io_timeout": 0,
00:20:41.963       "avg_latency_us": 2105.248641570411,
00:20:41.963       "min_latency_us": 644.189090909091,
00:20:41.963       "max_latency_us": 11319.854545454546
00:20:41.963     }
00:20:41.963   ],
00:20:41.963   "core_count": 1
00:20:41.963 }
00:20:41.963 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:41.963 | .driver_specific
00:20:41.963 | .nvme_error
00:20:41.963 | .status_code
00:20:41.963 | .command_transient_transport_error'
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 490 > 0 ))
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94214
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94214 ']'
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94214
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94214
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
killing process with pid 94214
17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94214'
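The pass/fail decision above does not come from scraping the console output. Because bdev_nvme_set_options was called with --nvme-error-stat, the bdev layer keeps per-status-code error counters, and get_transient_errcount reads them back over the bperf RPC socket with bdev_get_iostat plus the jq filter shown in the trace; here the counter came back as 490, so the (( 490 > 0 )) assertion holds. The throughput line is also self-consistent: 7587 IOPS at 131072-byte I/Os is 7587/8 = 948.4 MiB/s. A standalone sketch of the same counter query, assuming the socket and bdev names used above:

    # read the transient-transport-error counter kept by bdev_nvme
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'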
Received shutdown signal, test time was about 2.000000 seconds
00:20:42.531
00:20:42.531 Latency(us)
00:20:42.531 [2024-12-06T17:20:24.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:42.531 [2024-12-06T17:20:24.829Z] ===================================================================================================================
00:20:42.531 [2024-12-06T17:20:24.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94214
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94214
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94303
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94303 /var/tmp/bperf.sock
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94303 ']'
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:20:42.531 17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:42.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
17:20:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:42.531 [2024-12-06 17:20:24.755690] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
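run_bperf_err now brings up a fresh bdevperf instance for the randwrite/4096/qd128 case. The -z flag keeps bdevperf idle after startup so that it can be driven entirely over the private RPC socket given with -r, and waitforlisten polls that socket (up to max_retries=100) before the test continues. A condensed sketch of the same launch pattern; the polling loop stands in for waitforlisten from autotest_common.sh, and rpc_get_methods is used here only as a cheap liveness probe:

    # start bdevperf idle on a private RPC socket, then wait for the socket to answer
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done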
00:20:42.531 [2024-12-06 17:20:24.755783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94303 ] 00:20:42.790 [2024-12-06 17:20:24.907895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.790 [2024-12-06 17:20:24.946375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.790 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.790 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:20:42.790 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:42.790 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:20:43.355 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:20:43.355 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.355 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.355 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.355 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:43.355 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:20:43.612 nvme0n1 00:20:43.612 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:20:43.612 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.612 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:20:43.612 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.612 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:20:43.612 17:20:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:20:43.612 Running I/O for 2 seconds... 
00:20:43.612 [2024-12-06 17:20:25.905992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eee190 00:20:43.612 [2024-12-06 17:20:25.907701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.612 [2024-12-06 17:20:25.907750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:25.919169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee0630 00:20:43.870 [2024-12-06 17:20:25.920060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:25.920096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:25.932501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee84c0 00:20:43.870 [2024-12-06 17:20:25.933435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:25.933468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:25.950029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee49b0 00:20:43.870 [2024-12-06 17:20:25.951771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:25.951803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:25.961791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee5220 00:20:43.870 [2024-12-06 17:20:25.962707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:25.962739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:25.976147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eea680 00:20:43.870 [2024-12-06 17:20:25.977015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:25.977047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:25.990506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edf988 00:20:43.870 [2024-12-06 17:20:25.991424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:25.991458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 
sqhd:0025 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.005421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef5be8 00:20:43.870 [2024-12-06 17:20:26.006732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.006768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.020635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edece0 00:20:43.870 [2024-12-06 17:20:26.021926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.021959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.034114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef6cc8 00:20:43.870 [2024-12-06 17:20:26.035405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.035436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.049516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef6cc8 00:20:43.870 [2024-12-06 17:20:26.050770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.050803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.065939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef6cc8 00:20:43.870 [2024-12-06 17:20:26.067776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:14679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.067807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.074671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef1ca0 00:20:43.870 [2024-12-06 17:20:26.075403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.075435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.089136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef2d80 00:20:43.870 [2024-12-06 17:20:26.090552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.090583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:74 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.100351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef1430 00:20:43.870 [2024-12-06 17:20:26.101479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.101515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.112079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee73e0 00:20:43.870 [2024-12-06 17:20:26.113183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.870 [2024-12-06 17:20:26.113215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:43.870 [2024-12-06 17:20:26.124247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef46d0 00:20:43.871 [2024-12-06 17:20:26.124865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.871 [2024-12-06 17:20:26.124898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:43.871 [2024-12-06 17:20:26.139138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef0ff8 00:20:43.871 [2024-12-06 17:20:26.141096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.871 [2024-12-06 17:20:26.141128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:43.871 [2024-12-06 17:20:26.147701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4f40 00:20:43.871 [2024-12-06 17:20:26.148517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.871 [2024-12-06 17:20:26.148549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:43.871 [2024-12-06 17:20:26.161527] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eec840 00:20:43.871 [2024-12-06 17:20:26.162537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:43.871 [2024-12-06 17:20:26.162570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.172955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee3498 00:20:44.129 [2024-12-06 17:20:26.173763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.173797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.184461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efac10 00:20:44.129 [2024-12-06 17:20:26.185140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.185174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.198244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee5ec8 00:20:44.129 [2024-12-06 17:20:26.199759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.199792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.209357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef0ff8 00:20:44.129 [2024-12-06 17:20:26.210738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.210775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.221137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef3e60 00:20:44.129 [2024-12-06 17:20:26.222465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.222497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.235552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeb328 00:20:44.129 [2024-12-06 17:20:26.237548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.237580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.244083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee1b48 00:20:44.129 [2024-12-06 17:20:26.245099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.245130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.258509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efac10 00:20:44.129 [2024-12-06 17:20:26.260050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.260084] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.269932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ede470 00:20:44.129 [2024-12-06 17:20:26.271313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.271344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.281412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efeb58 00:20:44.129 [2024-12-06 17:20:26.282620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.282663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.292831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef8a50 00:20:44.129 [2024-12-06 17:20:26.293895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.293927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.304335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef0788 00:20:44.129 [2024-12-06 17:20:26.305213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.305243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.315788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efb480 00:20:44.129 [2024-12-06 17:20:26.316512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.316547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.330991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efc998 00:20:44.129 [2024-12-06 17:20:26.332729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.332763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.342538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee88f8 00:20:44.129 [2024-12-06 17:20:26.344136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 
17:20:26.344169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.354012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efe720 00:20:44.129 [2024-12-06 17:20:26.355432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.355463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.365534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee95a0 00:20:44.129 [2024-12-06 17:20:26.366794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.366827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.377334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efda78 00:20:44.129 [2024-12-06 17:20:26.378596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.378636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.389613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efeb58 00:20:44.129 [2024-12-06 17:20:26.390871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.390903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.401089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efcdd0 00:20:44.129 [2024-12-06 17:20:26.402199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.402231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.415287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeee38 00:20:44.129 [2024-12-06 17:20:26.417027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.129 [2024-12-06 17:20:26.417063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:44.129 [2024-12-06 17:20:26.423900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee12d8 00:20:44.129 [2024-12-06 17:20:26.424667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:44.129 [2024-12-06 17:20:26.424699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.438388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eea680 00:20:44.388 [2024-12-06 17:20:26.439676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.439708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.449828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee1f80 00:20:44.388 [2024-12-06 17:20:26.450942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.450983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.461303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7538 00:20:44.388 [2024-12-06 17:20:26.462247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.462290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.472732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eed4e8 00:20:44.388 [2024-12-06 17:20:26.473538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.473571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.487879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efd640 00:20:44.388 [2024-12-06 17:20:26.489676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.489708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.499463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee6300 00:20:44.388 [2024-12-06 17:20:26.501097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.501131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.508296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eef270 00:20:44.388 [2024-12-06 17:20:26.509077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12848 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.509108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.522830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efb480 00:20:44.388 [2024-12-06 17:20:26.524137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.524169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.535427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee27f0 00:20:44.388 [2024-12-06 17:20:26.537064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.537096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.546687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4b08 00:20:44.388 [2024-12-06 17:20:26.547999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.548032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.558439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee3d08 00:20:44.388 [2024-12-06 17:20:26.559801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.559833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.572924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeb760 00:20:44.388 [2024-12-06 17:20:26.574958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.574990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.581593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eddc00 00:20:44.388 [2024-12-06 17:20:26.582719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.582762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.596232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee6300 00:20:44.388 [2024-12-06 17:20:26.598014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 
lba:19404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.598046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.604925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eea680 00:20:44.388 [2024-12-06 17:20:26.605761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.605793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.617339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eee5c8 00:20:44.388 [2024-12-06 17:20:26.618145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.618176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.631836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef1868 00:20:44.388 [2024-12-06 17:20:26.632854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.632887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.644002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eee5c8 00:20:44.388 [2024-12-06 17:20:26.645341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.645372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.655596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee0ea0 00:20:44.388 [2024-12-06 17:20:26.656763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.656795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.670115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef9b30 00:20:44.388 [2024-12-06 17:20:26.672123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.672154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:44.388 [2024-12-06 17:20:26.678825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edece0 00:20:44.388 [2024-12-06 17:20:26.679854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:41 nsid:1 lba:7904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.388 [2024-12-06 17:20:26.679888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.693420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efeb58 00:20:44.647 [2024-12-06 17:20:26.695140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.695174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.704346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efdeb0 00:20:44.647 [2024-12-06 17:20:26.705208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.705240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.716729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef92c0 00:20:44.647 [2024-12-06 17:20:26.718091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.718124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.731366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efe720 00:20:44.647 [2024-12-06 17:20:26.733424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.733457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.740066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4298 00:20:44.647 [2024-12-06 17:20:26.740966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.740998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.755463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef2510 00:20:44.647 [2024-12-06 17:20:26.757371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.757403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.767163] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee6300 00:20:44.647 [2024-12-06 17:20:26.768930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.768962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.777463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee4578 00:20:44.647 [2024-12-06 17:20:26.778716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.778748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.792096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efc128 00:20:44.647 [2024-12-06 17:20:26.794022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.794053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.800860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7da8 00:20:44.647 [2024-12-06 17:20:26.801789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.801820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.815504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef46d0 00:20:44.647 [2024-12-06 17:20:26.817119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.817152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.828354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef5378 00:20:44.647 [2024-12-06 17:20:26.830141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.830175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.840886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eff3c8 00:20:44.647 [2024-12-06 17:20:26.842677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.647 [2024-12-06 17:20:26.842713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:44.647 [2024-12-06 17:20:26.852649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eef270 00:20:44.648 [2024-12-06 
17:20:26.854283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.854316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.863790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef6458 00:20:44.648 [2024-12-06 17:20:26.864982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.865014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.875756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edfdc0 00:20:44.648 [2024-12-06 17:20:26.877046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.877078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.890389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef0ff8 00:20:44.648 20356.00 IOPS, 79.52 MiB/s [2024-12-06T17:20:26.946Z] [2024-12-06 17:20:26.892430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.892460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.899148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7970 00:20:44.648 [2024-12-06 17:20:26.900129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.900160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.913746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eec408 00:20:44.648 [2024-12-06 17:20:26.915248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.915291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.925356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef5be8 00:20:44.648 [2024-12-06 17:20:26.926693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.926726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:44.648 [2024-12-06 17:20:26.937005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1af6570) with pdu=0x200016ef6890 00:20:44.648 [2024-12-06 17:20:26.938187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.648 [2024-12-06 17:20:26.938222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:26.950523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efbcf0 00:20:44.907 [2024-12-06 17:20:26.952191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:26.952224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:26.962146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eebfd0 00:20:44.907 [2024-12-06 17:20:26.963684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:26.963716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:26.973814] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efc560 00:20:44.907 [2024-12-06 17:20:26.975166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:26.975197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:26.985447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee5220 00:20:44.907 [2024-12-06 17:20:26.986645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:26.986677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:26.997079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeaef0 00:20:44.907 [2024-12-06 17:20:26.998125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:26.998158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:27.009884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee6738 00:20:44.907 [2024-12-06 17:20:27.011014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:27.011047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:27.024022] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edf550 00:20:44.907 [2024-12-06 17:20:27.025565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:27.025597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:27.035699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee5220 00:20:44.907 [2024-12-06 17:20:27.036841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.907 [2024-12-06 17:20:27.036873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:44.907 [2024-12-06 17:20:27.047710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eef270 00:20:44.908 [2024-12-06 17:20:27.048917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.048951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.062512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4b08 00:20:44.908 [2024-12-06 17:20:27.064406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.064440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.071252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee8088 00:20:44.908 [2024-12-06 17:20:27.072153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.072182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.085970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ede038 00:20:44.908 [2024-12-06 17:20:27.087608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.087638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.097399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee23b8 00:20:44.908 [2024-12-06 17:20:27.098585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.098618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:44.908 
[2024-12-06 17:20:27.109506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee6fa8 00:20:44.908 [2024-12-06 17:20:27.110639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.110670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.121142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee3498 00:20:44.908 [2024-12-06 17:20:27.122087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.122120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.132822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4298 00:20:44.908 [2024-12-06 17:20:27.133623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.133657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.148312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edf550 00:20:44.908 [2024-12-06 17:20:27.150103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.150141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.159965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7da8 00:20:44.908 [2024-12-06 17:20:27.161608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.161639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.171774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eff3c8 00:20:44.908 [2024-12-06 17:20:27.173281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.173314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.182842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee4de8 00:20:44.908 [2024-12-06 17:20:27.183911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.183944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:20:44.908 [2024-12-06 17:20:27.194907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efb048 00:20:44.908 [2024-12-06 17:20:27.196056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:44.908 [2024-12-06 17:20:27.196087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:45.167 [2024-12-06 17:20:27.207136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee84c0 00:20:45.167 [2024-12-06 17:20:27.207816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.167 [2024-12-06 17:20:27.207849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:45.167 [2024-12-06 17:20:27.221345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee2c28 00:20:45.167 [2024-12-06 17:20:27.222852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.167 [2024-12-06 17:20:27.222885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:45.167 [2024-12-06 17:20:27.232989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edf988 00:20:45.167 [2024-12-06 17:20:27.234338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.167 [2024-12-06 17:20:27.234368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:45.167 [2024-12-06 17:20:27.244743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef8618 00:20:45.167 [2024-12-06 17:20:27.245931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.167 [2024-12-06 17:20:27.245963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:45.167 [2024-12-06 17:20:27.256479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4298 00:20:45.167 [2024-12-06 17:20:27.257520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.167 [2024-12-06 17:20:27.257554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:45.167 [2024-12-06 17:20:27.271092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee5220 00:20:45.168 [2024-12-06 17:20:27.272925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.272956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.279918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef92c0 00:20:45.168 [2024-12-06 17:20:27.280770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.280801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.294471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efef90 00:20:45.168 [2024-12-06 17:20:27.296013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.296045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.305880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efc128 00:20:45.168 [2024-12-06 17:20:27.307044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.307078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.317835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efa7d8 00:20:45.168 [2024-12-06 17:20:27.319191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.319223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.330173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efb480 00:20:45.168 [2024-12-06 17:20:27.330918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.330952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.345319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef46d0 00:20:45.168 [2024-12-06 17:20:27.347438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.347472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.354060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edf988 00:20:45.168 [2024-12-06 17:20:27.355012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.355057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.367985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efe720 00:20:45.168 [2024-12-06 17:20:27.369110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.369144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.380484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7100 00:20:45.168 [2024-12-06 17:20:27.382339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.382372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.392087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eefae0 00:20:45.168 [2024-12-06 17:20:27.393323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.393355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.404056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee1b48 00:20:45.168 [2024-12-06 17:20:27.405351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.405382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.418716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7100 00:20:45.168 [2024-12-06 17:20:27.420687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.420720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.427461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efd640 00:20:45.168 [2024-12-06 17:20:27.428304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.428336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.442795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efdeb0 00:20:45.168 [2024-12-06 17:20:27.444635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.444667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:45.168 [2024-12-06 17:20:27.454458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef8a50 00:20:45.168 [2024-12-06 17:20:27.456131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.168 [2024-12-06 17:20:27.456163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.466124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eec408 00:20:45.428 [2024-12-06 17:20:27.467675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.467707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.477700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7100 00:20:45.428 [2024-12-06 17:20:27.478979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.479015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.489499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ede8a8 00:20:45.428 [2024-12-06 17:20:27.490795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.490828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.501603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edece0 00:20:45.428 [2024-12-06 17:20:27.502407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.502443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.513112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee7c50 00:20:45.428 [2024-12-06 17:20:27.513790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.513824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.523996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee6fa8 00:20:45.428 [2024-12-06 17:20:27.524806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.524837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.538467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7538 00:20:45.428 [2024-12-06 17:20:27.539960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.539992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.550576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efa7d8 00:20:45.428 [2024-12-06 17:20:27.551589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.551622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.562321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee23b8 00:20:45.428 [2024-12-06 17:20:27.563685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.563726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.574101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eef270 00:20:45.428 [2024-12-06 17:20:27.575470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.575501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.585332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef0ff8 00:20:45.428 [2024-12-06 17:20:27.586383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.586416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.597060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef8a50 00:20:45.428 [2024-12-06 17:20:27.598104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.598136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.609346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef4298 00:20:45.428 [2024-12-06 17:20:27.610394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 
17:20:27.610426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.620849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eef270 00:20:45.428 [2024-12-06 17:20:27.621769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.621800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.632409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee7c50 00:20:45.428 [2024-12-06 17:20:27.633154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.633196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.647327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016efbcf0 00:20:45.428 [2024-12-06 17:20:27.648911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.648947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.658547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef7970 00:20:45.428 [2024-12-06 17:20:27.659857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.659892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.670357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef2948 00:20:45.428 [2024-12-06 17:20:27.671649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.671681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.682948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef3a28 00:20:45.428 [2024-12-06 17:20:27.684398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.684432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.694202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee88f8 00:20:45.428 [2024-12-06 17:20:27.695446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:20:45.428 [2024-12-06 17:20:27.695483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.705980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ede8a8 00:20:45.428 [2024-12-06 17:20:27.707148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.707191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:45.428 [2024-12-06 17:20:27.720479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016edf550 00:20:45.428 [2024-12-06 17:20:27.722310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.428 [2024-12-06 17:20:27.722345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.729251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee8088 00:20:45.687 [2024-12-06 17:20:27.730105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.730139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.743752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee1f80 00:20:45.687 [2024-12-06 17:20:27.745279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.745314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.754948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee4de8 00:20:45.687 [2024-12-06 17:20:27.756169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.756203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.766727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeaab8 00:20:45.687 [2024-12-06 17:20:27.767989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.768021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.779028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee8d30 00:20:45.687 [2024-12-06 17:20:27.780256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10785 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.780301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.792662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeee38 00:20:45.687 [2024-12-06 17:20:27.794411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.794444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.801290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef0350 00:20:45.687 [2024-12-06 17:20:27.802029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.802067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.815756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef81e0 00:20:45.687 [2024-12-06 17:20:27.817181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.817214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.826974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee12d8 00:20:45.687 [2024-12-06 17:20:27.828118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.828151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:45.687 [2024-12-06 17:20:27.838706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ef3a28 00:20:45.687 [2024-12-06 17:20:27.839838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.687 [2024-12-06 17:20:27.839868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:45.688 [2024-12-06 17:20:27.853130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee7c50 00:20:45.688 [2024-12-06 17:20:27.854953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:45.688 [2024-12-06 17:20:27.854983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:45.688 [2024-12-06 17:20:27.861727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee0630 00:20:45.688 [2024-12-06 17:20:27.862569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:8680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:45.688 [2024-12-06 17:20:27.862600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:20:45.688 [2024-12-06 17:20:27.876292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016ee8d30
00:20:45.688 [2024-12-06 17:20:27.877809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:45.688 [2024-12-06 17:20:27.877841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:20:45.688 [2024-12-06 17:20:27.887551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af6570) with pdu=0x200016eeff18
00:20:45.688 [2024-12-06 17:20:27.888772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:45.688 [2024-12-06 17:20:27.888805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:20:45.688 20647.50 IOPS, 80.65 MiB/s
00:20:45.688 Latency(us)
00:20:45.688 [2024-12-06T17:20:27.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:45.688 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:20:45.688 nvme0n1 : 2.00 20660.54 80.71 0.00 0.00 6186.24 2502.28 17635.14
00:20:45.688 [2024-12-06T17:20:27.986Z] ===================================================================================================================
00:20:45.688 [2024-12-06T17:20:27.986Z] Total : 20660.54 80.71 0.00 0.00 6186.24 2502.28 17635.14
00:20:45.688 {
00:20:45.688 "results": [
00:20:45.688 {
00:20:45.688 "job": "nvme0n1",
00:20:45.688 "core_mask": "0x2",
00:20:45.688 "workload": "randwrite",
00:20:45.688 "status": "finished",
00:20:45.688 "queue_depth": 128,
00:20:45.688 "io_size": 4096,
00:20:45.688 "runtime": 2.004933,
00:20:45.688 "iops": 20660.540776175563,
00:20:45.688 "mibps": 80.7052374069358,
00:20:45.688 "io_failed": 0,
00:20:45.688 "io_timeout": 0,
00:20:45.688 "avg_latency_us": 6186.238367880821,
00:20:45.688 "min_latency_us": 2502.2836363636366,
00:20:45.688 "max_latency_us": 17635.14181818182
00:20:45.688 }
00:20:45.688 ],
00:20:45.688 "core_count": 1
00:20:45.688 }
00:20:45.688 17:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:45.688 17:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:45.688 17:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:45.688 17:20:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:45.688 | .driver_specific
00:20:45.688 | .nvme_error
00:20:45.688 | .status_code
00:20:45.688 | .command_transient_transport_error'
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
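
The jq pipeline traced above is the test's actual assertion: with --nvme-error-stat in effect, bdev_get_iostat reports per-status-code NVMe error counters under driver_specific.nvme_error, and get_transient_errcount extracts how many completions carried COMMAND TRANSIENT TRANSPORT ERROR. The (( 162 > 0 )) check then passes because 162 digest-corrupted writes were counted. A standalone sketch of the same query, assuming an SPDK checkout at $SPDK_DIR and the bdevperf RPC socket still live:

    # Sketch of the get_transient_errcount step seen in the trace above.
    # Assumes: $SPDK_DIR is an SPDK checkout; bdevperf serves RPC on bperf.sock.
    errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors surfaced as $errcount transient transport errors"
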
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94303
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94303 ']'
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94303
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94303
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:46.255 killing process with pid 94303
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94303'
Received shutdown signal, test time was about 2.000000 seconds
00:20:46.255
00:20:46.255 Latency(us)
00:20:46.255 [2024-12-06T17:20:28.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.255 [2024-12-06T17:20:28.553Z] ===================================================================================================================
00:20:46.255 [2024-12-06T17:20:28.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94303
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94303
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94380
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94380 /var/tmp/bperf.sock
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 94380 ']'
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:46.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
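
run_bperf_err then repeats the measurement with 128 KiB random writes at queue depth 16. Restating the traced bdevperf invocation with my reading of its flags (the annotations are not in the log): -m 2 is a core mask pinning the app to core 1, -r names the JSON-RPC UNIX socket, -w/-o/-t/-q pick workload, I/O size, duration and queue depth, and -z launches bdevperf idle so it waits for a perform_tests RPC instead of running immediately:

    # The traced invocation, restated for readability; flag meanings are my
    # reading of bdevperf's options, not something the log itself states.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z

Note that -o 131072 sits above the 65536-byte zero-copy threshold, which is why the startup notice below reports that the zero copy mechanism will not be used.
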
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:46.255 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:46.513 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:46.513 Zero copy mechanism will not be used.
00:20:46.513 [2024-12-06 17:20:28.479098] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:20:46.513 [2024-12-06 17:20:28.479702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94380 ]
00:20:46.513 [2024-12-06 17:20:28.633170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.514 [2024-12-06 17:20:28.672867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:46.514 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:46.514 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:20:46.514 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:46.514 17:20:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:20:47.080 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:20:47.080 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.080 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:47.080 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.080 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:47.080 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:20:47.337 nvme0n1
00:20:47.337 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:20:47.337 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.337 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:20:47.337 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.337 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:20:47.337 17:20:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
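
Between bdevperf coming up and the run starting, the trace above arms the fault path: bdev_nvme_set_options enables per-status-code NVMe error accounting (--nvme-error-stat) and sets the retry budget the test wants (--bdev-retry-count -1); any stale crc32c injection is disabled; the controller is attached with --ddgst so data digests are generated and checked on the TCP connection; and accel_error_inject_error then requests crc32c corruption with -i 32. The accel calls go through rpc_cmd, which appears to target the NVMe-oF target application's default RPC socket rather than bperf.sock. A condensed sketch of the same sequence, using the paths as they appear in this job:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Track NVMe errors per status code so bdev_get_iostat can report them later.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any leftover crc32c injection from the previous run (target side, default socket).
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach over TCP with data-digest (DDGST) verification enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption in the accel layer (the -i 32 argument as captured in the trace).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Release the idle bdevperf instance; this produces the run below.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With the corruption armed, mangled crc32c results make write data digests fail verification, and those failures surface as the Data digest error / COMMAND TRANSIENT TRANSPORT ERROR pairs that fill the run below.
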
00:20:47.597 I/O size of 131072 is greater than zero copy threshold (65536).
00:20:47.597 Zero copy mechanism will not be used.
00:20:47.597 Running I/O for 2 seconds...
00:20:47.597 [2024-12-06 17:20:29.686335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.686458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.686489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.691841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.691953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.691978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.697291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.697372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.697396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.702652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.702752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.702776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.709178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.709289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.709314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.714579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.714694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.714718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.719969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.720067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06
17:20:29.720096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.725853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.725957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.725981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.731286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.731391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.731414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.736688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.736785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.736809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.742064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.742142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.742165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.747511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.747602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.747626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.752835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.752940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.752965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.758736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.758861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:47.597 [2024-12-06 17:20:29.758886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.764781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.764881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.764904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.770088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.770197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.770220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.775717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.775840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.775863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.781284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.781387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.781411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.786719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.786850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.786875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.792089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.792182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.792206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.797619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.797720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.797744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.803716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.803816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.803839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.809107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.809217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.809241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.814618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.814727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.814751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.820072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.820160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.820184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.825514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.825618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.825646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.830981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.831104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.831131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.837427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.837532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.837555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.843318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.843435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.843458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.849399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.849496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.849519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.854748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.854847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.854870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.860028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.860133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.860156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.866697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.866808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.866833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.871993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.872089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.597 [2024-12-06 17:20:29.872113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:47.597 [2024-12-06 17:20:29.877463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:47.597 [2024-12-06 17:20:29.877542] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.877566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:47.597 [2024-12-06 17:20:29.882954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:47.597 [2024-12-06 17:20:29.883056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.597 [2024-12-06 17:20:29.883078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
[... the same three-record pattern repeats from [2024-12-06 17:20:29.888876] through [2024-12-06 17:20:30.669333] (console clock 00:20:47.597 to 00:20:48.651): tcp.c:2241:data_crc32_calc_done reports a data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8, nvme_qpair.c prints the affected WRITE sqid:1 cid:0 nsid:1 len:32 command at a varying lba, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 with sqhd cycling 0002/0022/0042/0062 ...]
00:20:48.651 [2024-12-06 17:20:30.674555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:48.651 [2024-12-06 17:20:30.674719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.651 [2024-12-06 17:20:30.674744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:48.651 [2024-12-06 17:20:30.681158] tcp.c:2241:data_crc32_calc_done:
*ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.681319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.681349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 5594.00 IOPS, 699.25 MiB/s [2024-12-06T17:20:30.949Z] [2024-12-06 17:20:30.688420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.688593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.688635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.693836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.693944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.693979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.699369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.699493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.699521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.704773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.704944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.704977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.710204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.710391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.710423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.715566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.715668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.715691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:20:48.651 [2024-12-06 17:20:30.721420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.721526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.721550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.726748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.726848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.726871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.732128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.732231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.732254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.737436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.737516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.737539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.742888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.743081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.743112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.748422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.748555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.748585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.754322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.754428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.754452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.759706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.759803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.759826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.765047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.765160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.765193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.770421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.770732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.770778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.775258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.775622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.775660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.780338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.780682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.780719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.785384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.785721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.785756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.790417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.790814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.790847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.795549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.795895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.795932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.800637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.800990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.801026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.805831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.806159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.806195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.812790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.813089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.813124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.819584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.651 [2024-12-06 17:20:30.819915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.651 [2024-12-06 17:20:30.819951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.651 [2024-12-06 17:20:30.824971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.825340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.825376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.830625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.830986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.831021] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.835748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.836108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.836143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.841020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.841388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.841423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.846182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.846505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.846541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.852457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.852812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.852847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.857985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.858306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.858344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.863030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.863361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.863399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.867793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.868117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.868157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.872742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.873062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.873095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.877661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.877896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.877935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.882470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.882691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.882724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.887286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.887515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.887548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.892059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.892216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.892248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.896848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.897023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.897051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.902938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.903112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 
17:20:30.903145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.907699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.907880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.907911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.912456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.912621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.912652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.917168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.917355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.917378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.921937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.922121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.922143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.926772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.927038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.927072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.931561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.931844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.931875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.936287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.936472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:48.652 [2024-12-06 17:20:30.936501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.940934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.941112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.941134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.652 [2024-12-06 17:20:30.946147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.652 [2024-12-06 17:20:30.946346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.652 [2024-12-06 17:20:30.946371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.950886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.951078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.951125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.955559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.955748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.955772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.960248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.960408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.960431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.964925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.965083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.965105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.969735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.969845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.969867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.975702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.975913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.975940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.980479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.980672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.980699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.985287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.985472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.966 [2024-12-06 17:20:30.985500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.966 [2024-12-06 17:20:30.990074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.966 [2024-12-06 17:20:30.990217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:30.990241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:30.994840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:30.995013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:30.995036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:30.999596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:30.999825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:30.999849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.004364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.004616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.004649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.009132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.009291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.009314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.013870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.014099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.014121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.019176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.019435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.019464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.023981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.024187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.024209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.028728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.028914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.028936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.033530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.033732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.033769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.038342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.038540] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.038572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.043039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.043289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.043322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.047838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.048042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.048064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.052597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.052772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.052794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.057376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.057560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.057588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.062145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.062396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.062420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.068222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.068379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.068402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.073018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.073201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.073223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.077764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.077995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.078017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.082550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.082754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.082791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.087373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.087555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.087577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.092105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.092249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.092284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.097491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.097644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.097666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.102230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.102435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.102457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.107013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 
17:20:31.107280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.107315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.111840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.111989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.967 [2024-12-06 17:20:31.112011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.967 [2024-12-06 17:20:31.116609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.967 [2024-12-06 17:20:31.116754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.116776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.121430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.121606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.121629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.126204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.126452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.126482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.131019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.131167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.131190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.135774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.135963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.135985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.140831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 
00:20:48.968 [2024-12-06 17:20:31.141064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.141086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.145717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.145899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.145928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.150587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.150780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.150802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.155323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.155556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.155578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.160145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.160381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.160404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.164944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.165150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.165173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.169690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.169837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.169861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.174469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.174620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.174655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.179235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.179432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.179455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.184231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.184444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.184467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.189500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.189742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.189766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.194243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.194494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.194516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.199134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.199320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.199342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:48.968 [2024-12-06 17:20:31.203940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:48.968 [2024-12-06 17:20:31.204160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.968 [2024-12-06 17:20:31.204186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.208691] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.208861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.208892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.213454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.213653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.213676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.218199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.218440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.218464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.223601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.223776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.223804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.228400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.228584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.228607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.233141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.233381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.233404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.237890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.238137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.238165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.242712] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.242900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.242922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.247466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.247666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.247688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.228 [2024-12-06 17:20:31.252252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.228 [2024-12-06 17:20:31.252491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.228 [2024-12-06 17:20:31.252513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.257023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.257167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.257189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.261843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.261947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.261969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.269174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.269406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.269430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.276241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.276409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.276432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.229 
[2024-12-06 17:20:31.283487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.283629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.283652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.290044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.290199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.290222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.294772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.295005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.295035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.299576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.299730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.299763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.304378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.304546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.304570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.309169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.309387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.309418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.314001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.314189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.314222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.318800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.319022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.319050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.323577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.323833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.323866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.328388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.328608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.328630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.333106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.333333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.333356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.337830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.337972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.337994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.342569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.342765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.342787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.347249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.347502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.347530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.352062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.352208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.352231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.358020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.358245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.358286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.362844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.363092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.363121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.229 [2024-12-06 17:20:31.367660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.229 [2024-12-06 17:20:31.367899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.229 [2024-12-06 17:20:31.367927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.373393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.373563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.373586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.378152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.378350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.378382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.382952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.383119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.383143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.387721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.387947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.392406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.392622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.392647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.397162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.397377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.397401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.401972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.402193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.402215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.406783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.406974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.406997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.411590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.411750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.411772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.416324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.416508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.416530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.421028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.421222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.421245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.425792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.426012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.426044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.430496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.430663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.430686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.435210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.435415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.435437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.439878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.440115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.440148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.444677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.444820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.444844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.449431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.449586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 
17:20:31.449609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.454090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.454302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.454325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.458812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.459061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.459090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.463538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.463725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.463747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.468248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.468506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.230 [2024-12-06 17:20:31.468528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.230 [2024-12-06 17:20:31.473034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.230 [2024-12-06 17:20:31.473221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.473243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.477742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.477902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.477924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.482430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.482619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:20:49.231 [2024-12-06 17:20:31.482661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.487171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.487340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.487362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.491862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.492040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.492061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.496541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.496782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.496806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.501220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.501484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.501521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.505981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.506198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.506220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.510751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.510904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.510927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.515472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.515644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.515666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.231 [2024-12-06 17:20:31.520171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.231 [2024-12-06 17:20:31.520428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.231 [2024-12-06 17:20:31.520458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.524934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.525082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.525117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.529724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.529875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.529905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.534355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.534560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.534583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.539037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.539191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.539214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.543631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.543873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.543901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.548415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.548635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.548657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.553116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.553285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.490 [2024-12-06 17:20:31.553308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.490 [2024-12-06 17:20:31.557843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.490 [2024-12-06 17:20:31.558085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.558114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.562577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.562825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.562852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.567222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.567482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.567505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.571913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.572120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.572142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.576662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.576872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.576894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.581355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.581556] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.581578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.586085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.586255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.586291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.590768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.590940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.590961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.595780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.595965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.595987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.600530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.600725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.600749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.605277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.605525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.605549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.609967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.610224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.610247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.614698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.614883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.614907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.619463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.619625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.619648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.624086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.624353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.624376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.628852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.629048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.629070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.633564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.633739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.633761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.638216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.638464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.638494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.642932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 17:20:31.643135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.491 [2024-12-06 17:20:31.643162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:49.491 [2024-12-06 17:20:31.647631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8 00:20:49.491 [2024-12-06 
17:20:31.647877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.647907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.652418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.652590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.652625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.657077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.657306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.661836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.662079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.662107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.666567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.666782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.666805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.671356] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.671508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.671531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.675995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.676226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.676249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:20:49.491 [2024-12-06 17:20:31.680957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.681107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.681129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:20:49.491 5913.00 IOPS, 739.12 MiB/s [2024-12-06T17:20:31.789Z] [2024-12-06 17:20:31.686885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1af68b0) with pdu=0x200016eff3c8
00:20:49.491 [2024-12-06 17:20:31.686954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:49.491 [2024-12-06 17:20:31.686977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
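Each repeated triplet above is one deliberately corrupted write: the TCP transport's CRC32C check on the data PDU fails (the data_crc32_calc_done *ERROR* line), and the host driver then prints the affected WRITE command together with its completion, which carries generic status 0x22, rendered as COMMAND TRANSIENT TRANSPORT ERROR (00/22). That status is retriable, so the driver counts it instead of failing the bdev. A minimal sketch (not part of the harness) of cross-checking the two counts from a captured console log; "console.log" is a hypothetical capture of the output above:

  # Every failed digest check should surface as exactly one 00/22 completion,
  # so the two occurrence counts are expected to match one-to-one.
  crc_errors=$(grep -o 'Data digest error on tqpair' console.log | wc -l)
  transients=$(grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l)
  echo "digest errors: $crc_errors, transient completions: $transients"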
00:20:49.492
00:20:49.492 Latency(us)
00:20:49.492 [2024-12-06T17:20:31.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:49.492 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:20:49.492 nvme0n1 : 2.00 5910.32 738.79 0.00 0.00 2700.89 1817.13 8340.95
00:20:49.492 [2024-12-06T17:20:31.790Z] ===================================================================================================================
00:20:49.492 [2024-12-06T17:20:31.790Z] Total : 5910.32 738.79 0.00 0.00 2700.89 1817.13 8340.95
00:20:49.492 {
00:20:49.492 "results": [
00:20:49.492 {
00:20:49.492 "job": "nvme0n1",
00:20:49.492 "core_mask": "0x2",
00:20:49.492 "workload": "randwrite",
00:20:49.492 "status": "finished",
00:20:49.492 "queue_depth": 16,
00:20:49.492 "io_size": 131072,
00:20:49.492 "runtime": 2.004291,
00:20:49.492 "iops": 5910.319409706475,
00:20:49.492 "mibps": 738.7899262133094,
00:20:49.492 "io_failed": 0,
00:20:49.492 "io_timeout": 0,
00:20:49.492 "avg_latency_us": 2700.8918257025766,
00:20:49.492 "min_latency_us": 1817.1345454545456,
00:20:49.492 "max_latency_us": 8340.945454545454
00:20:49.492 }
00:20:49.492 ],
00:20:49.492 "core_count": 1
00:20:49.492 }
00:20:49.492 17:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:20:49.492 17:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:20:49.492 17:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:20:49.492 17:20:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:20:49.492 | .driver_specific
00:20:49.492 | .nvme_error
00:20:49.492 | .status_code
00:20:49.492 | .command_transient_transport_error'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 383 > 0 ))
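The trace above is host/digest.sh's get_transient_errcount helper: it asks the bdevperf instance over its RPC socket for the bdev's I/O statistics and extracts the transient-transport-error counter from the NVMe driver-specific section, and the (( 383 > 0 )) check shows this run harvested 383 such completions. The same query can be re-run by hand, assuming the bdevperf RPC socket at /var/tmp/bperf.sock is still alive (a sketch built from the traced commands):

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only asserts that the counter is non-zero, i.e. that the injected
  # digest errors were actually observed as transient transport errors.
  (( count > 0 )) && echo "observed $count transient transport errors"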
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94380
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94380 ']'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94380
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94380
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:20:50.059 killing process with pid 94380 Received shutdown signal, test time was about 2.000000 seconds
00:20:50.059
00:20:50.059 Latency(us)
00:20:50.059 [2024-12-06T17:20:32.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:50.059 [2024-12-06T17:20:32.357Z] ===================================================================================================================
00:20:50.059 [2024-12-06T17:20:32.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94380'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94380
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94380
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94111
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 94111 ']'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 94111
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94111
00:20:50.059 killing process with pid 94111
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94111'
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 94111
00:20:50.059 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 94111
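Both helper processes are reaped through autotest_common.sh's killprocess; the traced lines above (@954 through @978) imply roughly the following shape. This is a reconstruction from the trace, not the exact source, and anything the trace does not show is a guess:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                       # @954: require a pid argument
      if ! kill -0 "$pid"; then                       # @958: is the process still alive?
          echo "Process with pid $pid is not found"   # @981, seen below when pid 94111 is reaped a second time
          return 0
      fi
      if [ "$(uname)" = Linux ]; then                 # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960
      fi
      [ "$process_name" = sudo ] && return 1          # @964: never kill a sudo wrapper
      echo "killing process with pid $pid"            # @972
      kill "$pid"                                     # @973
      wait "$pid"                                     # @978: reap it and propagate its exit status
  }

Here pid 94380 was the bdevperf process (comm reactor_1) and pid 94111 the nvmf target (comm reactor_0).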
00:20:50.335 ************************************
00:20:50.335 END TEST nvmf_digest_error
00:20:50.335 ************************************
00:20:50.335
00:20:50.335 real 0m16.429s
00:20:50.335 user 0m33.022s
00:20:50.335 sys 0m4.146s
00:20:50.335 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:50.336 rmmod nvme_tcp
00:20:50.336 rmmod nvme_fabrics
00:20:50.336 rmmod nvme_keyring
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 94111 ']'
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 94111
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 94111 ']'
00:20:50.336 Process with pid 94111 is not found
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 94111
00:20:50.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94111) - No such process
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 94111 is not found'
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:20:50.336 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini
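nvmftestfini then unwinds what nvmftestinit set up: the kernel NVMe modules are unloaded (the rmmod lines above), iptr restores the firewall minus the SPDK_NVMF rules, and nvmf_veth_fini deletes the veth/bridge topology in the ip link trace that follows. Condensed into plain commands, the traced cleanup amounts to this sketch (the helpers' full source is not shown in the log):

  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics        # unload kernel initiator modules
  iptables-save | grep -v SPDK_NVMF | iptables-restore          # iptr: drop only the SPDK_NVMF rules
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster && ip link set "$dev" down    # detach veths from the bridge
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2    # the netns itself goes last, via remove_spdk_ns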
-- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.593 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:20:50.593 ************************************ 00:20:50.593 END TEST nvmf_digest 00:20:50.593 ************************************ 00:20:50.593 00:20:50.593 real 0m33.695s 00:20:50.594 user 1m5.611s 00:20:50.594 sys 0m8.665s 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.594 ************************************ 00:20:50.594 START TEST nvmf_mdns_discovery 00:20:50.594 ************************************ 00:20:50.594 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:20:50.853 * Looking for test storage... 
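The teardown traced above never flushes the firewall wholesale: every rule the harness installs carries an iptables comment tagged SPDK_NVMF, so cleanup can round-trip the whole ruleset through a text filter and drop only its own entries. A minimal sketch of the pattern (the interface and port mirror the rules installed later in this same run):

    # install a rule carrying a findable tag
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # remove every tagged rule in one pass: dump, filter, reload
    iptables-save | grep -v SPDK_NVMF | iptables-restore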
00:20:50.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.853 --rc genhtml_branch_coverage=1 00:20:50.853 --rc genhtml_function_coverage=1 00:20:50.853 --rc genhtml_legend=1 00:20:50.853 --rc geninfo_all_blocks=1 00:20:50.853 --rc geninfo_unexecuted_blocks=1 00:20:50.853 00:20:50.853 ' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.853 --rc genhtml_branch_coverage=1 00:20:50.853 --rc genhtml_function_coverage=1 00:20:50.853 --rc genhtml_legend=1 00:20:50.853 --rc geninfo_all_blocks=1 00:20:50.853 --rc geninfo_unexecuted_blocks=1 00:20:50.853 00:20:50.853 ' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.853 --rc genhtml_branch_coverage=1 00:20:50.853 --rc genhtml_function_coverage=1 00:20:50.853 --rc genhtml_legend=1 00:20:50.853 --rc geninfo_all_blocks=1 00:20:50.853 --rc geninfo_unexecuted_blocks=1 00:20:50.853 00:20:50.853 ' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:50.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.853 --rc genhtml_branch_coverage=1 00:20:50.853 --rc genhtml_function_coverage=1 00:20:50.853 --rc genhtml_legend=1 00:20:50.853 --rc geninfo_all_blocks=1 00:20:50.853 --rc geninfo_unexecuted_blocks=1 00:20:50.853 00:20:50.853 ' 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.853 17:20:32 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.853 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:50.854 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:50.854 Cannot find device "nvmf_init_br" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:50.854 Cannot find device "nvmf_init_br2" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:50.854 Cannot find device "nvmf_tgt_br" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.854 Cannot find device "nvmf_tgt_br2" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:50.854 Cannot find device "nvmf_init_br" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:50.854 Cannot find device "nvmf_init_br2" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:50.854 Cannot find device "nvmf_tgt_br" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:50.854 Cannot find device "nvmf_tgt_br2" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:50.854 Cannot find device "nvmf_br" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:50.854 Cannot find device "nvmf_init_if" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:50.854 Cannot find device "nvmf_init_if2" 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:20:50.854 17:20:33 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.854 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.854 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:20:50.855 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:50.855 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:20:50.855 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
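The commands above (together with the enslaving steps just below) build the test network: the target lives in the nvmf_tgt_ns_spdk namespace, every endpoint is one end of a veth pair, and the peer ends join the nvmf_br bridge so the initiator addresses (10.0.0.1, 10.0.0.2) and target addresses (10.0.0.3, 10.0.0.4) share one L2 segment. A reduced sketch with a single pair per side, using the same commands as the trace:

    ip netns add nvmf_tgt_ns_spdk                              # isolated namespace for the target
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator end + bridge end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target end + bridge end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                            # shared L2 segment
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

The ping block that follows verifies that all four addresses answer across the bridge, in both directions.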
00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:51.113 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:51.114 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:51.114 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:20:51.114 00:20:51.114 --- 10.0.0.3 ping statistics --- 00:20:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.114 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:51.114 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:51.114 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:20:51.114 00:20:51.114 --- 10.0.0.4 ping statistics --- 00:20:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.114 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:51.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:51.114 00:20:51.114 --- 10.0.0.1 ping statistics --- 00:20:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.114 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:51.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:51.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:20:51.114 00:20:51.114 --- 10.0.0.2 ping statistics --- 00:20:51.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.114 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.114 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=94711 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 94711 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94711 ']' 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.372 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.372 [2024-12-06 17:20:33.492848] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:20:51.372 [2024-12-06 17:20:33.493137] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.372 [2024-12-06 17:20:33.633886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.372 [2024-12-06 17:20:33.665825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.372 [2024-12-06 17:20:33.666088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.372 [2024-12-06 17:20:33.666225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.372 [2024-12-06 17:20:33.666369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.372 [2024-12-06 17:20:33.666413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.372 [2024-12-06 17:20:33.666816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 [2024-12-06 17:20:33.873148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 [2024-12-06 17:20:33.881323] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 null0 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 null1 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 null2 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.631 null3 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.631 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.890 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
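Two SPDK processes are in play from here on. The target (pid 94711) was started inside the namespace on the default RPC socket; the process being waited on now is a second nvmf_tgt bound to /tmp/host.sock, which plays the host/initiator role. That split is why later RPCs alternate between the bare form and `-s /tmp/host.sock`. The shape of it, with the binaries and sockets as in this run:

    # target side: inside the namespace, default RPC socket /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    # host side: its own core mask and RPC socket, acts as the discovery client
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192        # lands on the target
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers    # lands on the host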
00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94743 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94743 /tmp/host.sock 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94743 ']' 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.890 17:20:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:51.890 [2024-12-06 17:20:33.980555] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:20:51.890 [2024-12-06 17:20:33.980932] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94743 ] 00:20:51.890 [2024-12-06 17:20:34.128896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.890 [2024-12-06 17:20:34.177543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94760 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:20:52.149 17:20:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:20:52.149 Process 1067 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:20:52.149 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:20:52.149 Successfully dropped root privileges. 00:20:52.149 avahi-daemon 0.8 starting up. 00:20:52.149 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:20:52.149 Successfully called chroot(). 
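The `-f /dev/fd/63` above is bash process substitution: no config file is written to disk; avahi-daemon reads a four-line configuration from an anonymous pipe that pins mDNS to the two target-side interfaces, IPv4 only. Expanded, the invocation is equivalent to:

    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(echo -e \
        '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no')
    # i.e. the config file
    #   [server]
    #   allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    #   use-ipv4=yes
    #   use-ipv6=no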
00:20:53.085 Successfully dropped remaining capabilities. 00:20:53.085 No service file found in /etc/avahi/services. 00:20:53.085 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:20:53.085 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:20:53.085 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:20:53.085 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:20:53.085 Network interface enumeration completed. 00:20:53.085 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 00:20:53.085 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:20:53.085 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:20:53.085 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:20:53.085 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 737344944. 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.342 
17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:53.342 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.600 [2024-12-06 17:20:35.699934] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.600 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.601 [2024-12-06 17:20:35.745807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.601 17:20:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:20:54.534 [2024-12-06 17:20:36.599930] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:20:54.791 [2024-12-06 17:20:36.999956] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:54.791 [2024-12-06 17:20:37.000008] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:20:54.791 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:54.791 cookie is 0 00:20:54.791 is_local: 1 00:20:54.791 our_own: 0 00:20:54.791 wide_area: 0 00:20:54.791 multicast: 1 00:20:54.791 cached: 1 00:20:55.049 [2024-12-06 17:20:37.099942] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:20:55.049 [2024-12-06 17:20:37.099991] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:20:55.049 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:55.049 cookie is 0 00:20:55.049 is_local: 1 00:20:55.049 our_own: 0 00:20:55.049 wide_area: 0 00:20:55.049 multicast: 1 00:20:55.049 cached: 1 00:20:55.979 [2024-12-06 17:20:38.001052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:55.979 [2024-12-06 17:20:38.001146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117e850 with addr=10.0.0.4, port=8009 00:20:55.979 [2024-12-06 17:20:38.001195] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:55.979 [2024-12-06 17:20:38.001225] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:55.979 [2024-12-06 17:20:38.001242] bdev_nvme.c:7579:discovery_poller: *ERROR*: 
Discovery[10.0.0.4:8009] could not start discovery connect 00:20:55.979 [2024-12-06 17:20:38.108205] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:55.979 [2024-12-06 17:20:38.108276] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:55.979 [2024-12-06 17:20:38.108314] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:55.979 [2024-12-06 17:20:38.196354] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns1_nvme0 00:20:55.979 [2024-12-06 17:20:38.256980] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:55.979 [2024-12-06 17:20:38.258258] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11b3820:1 started. 00:20:55.979 [2024-12-06 17:20:38.260252] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:20:55.979 [2024-12-06 17:20:38.260301] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:55.979 [2024-12-06 17:20:38.267059] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11b3820 was disconnected and freed. delete nvme_qpair. 00:20:56.911 [2024-12-06 17:20:39.000910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:56.911 [2024-12-06 17:20:39.000980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119c3a0 with addr=10.0.0.4, port=8009 00:20:56.911 [2024-12-06 17:20:39.001003] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:56.911 [2024-12-06 17:20:39.001014] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:56.911 [2024-12-06 17:20:39.001023] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:20:57.844 [2024-12-06 17:20:40.000943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:57.844 [2024-12-06 17:20:40.001050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119c580 with addr=10.0.0.4, port=8009 00:20:57.844 [2024-12-06 17:20:40.001081] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:20:57.844 [2024-12-06 17:20:40.001096] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:57.844 [2024-12-06 17:20:40.001115] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 
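The check_mdns_request_exists helper that runs next leans on avahi-browse's parseable mode. With `-p`, each record is one `;`-separated line: `+;iface;proto;name;type;domain` when a service appears, and `=;iface;proto;name;type;domain;host;address;port;TXT` once resolved. The helper greps those lines for the service name (spdk1 here) and compares the result against the expected found / not-found verdict. In isolation:

    # -t: terminate after dumping the cache; -r: resolve; -p: parseable output
    avahi-browse -t -r _nvme-disc._tcp -p
    # sample resolved line, matching the output below:
    # =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"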
00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:58.778 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:58.778 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:58.778 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.778 [2024-12-06 17:20:40.819809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:20:58.778 [2024-12-06 17:20:40.821907] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:58.778 [2024-12-06 17:20:40.821950] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:58.778 [2024-12-06 17:20:40.827886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:20:58.778 [2024-12-06 17:20:40.828899] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.778 17:20:40 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:20:58.778 [2024-12-06 17:20:40.960012] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:58.778 [2024-12-06 17:20:40.960090] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:58.778 [2024-12-06 17:20:41.008320] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:20:58.778 [2024-12-06 17:20:41.008359] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:20:58.778 [2024-12-06 17:20:41.008378] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:20:58.778 [2024-12-06 17:20:41.051477] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:59.037 [2024-12-06 17:20:41.094460] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns0_nvme0 00:20:59.037 [2024-12-06 17:20:41.148986] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:20:59.037 [2024-12-06 17:20:41.149751] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x11b0b40:1 started. 00:20:59.037 [2024-12-06 17:20:41.151347] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:20:59.037 [2024-12-06 17:20:41.151385] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:20:59.037 [2024-12-06 17:20:41.157007] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x11b0b40 was disconnected and freed. delete nvme_qpair. 
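The 'not found' check above works by parsing avahi-browse's parsable (-p) output, in which each semicolon-delimited record carries the service name, resolved hostname, address, port, and TXT data, and by requiring that no record mentions the spdk1 service before its listeners exist. A minimal bash re-implementation of that polarity check, assuming the same ';'-separated avahi-browse records shown in the trace (the function body is a sketch, not the test's actual helper):

# Sketch: succeed when the (process, ip, port) triple is present/absent
# in the mDNS browse results, matching the check_type polarity.
check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4
    local line
    while IFS= read -r line; do
        # A resolved record must carry the service name, address, and port.
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
            [[ $check_type == found ]] && return 0 || return 1
        fi
    done < <(avahi-browse -t -r _nvme-disc._tcp -p)
    # No record matched: success only when "not found" was expected.
    [[ $check_type == "not found" ]]
}

Once the 10.0.0.4 listeners are added (8009 for the discovery subsystem, 4420 for I/O), the same check with check_type=found is expected to pass, which is what the @160 check below verifies.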
00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:20:59.604 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:59.604 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:20:59.604 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:20:59.604 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:59.604 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:59.604 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:20:59.604 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:20:59.604 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:20:59.863 17:20:41 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]] 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@73 -- # xargs 00:20:59.863 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.121 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]] 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]] 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.122 [2024-12-06 17:20:42.239119] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11b87e0:1 started. 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:00.122 [2024-12-06 17:20:42.247235] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11b87e0 was disconnected and freed. delete nvme_qpair. 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.122 17:20:42 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1 00:21:00.122 [2024-12-06 17:20:42.252434] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x11b25e0:1 started. 00:21:00.122 [2024-12-06 17:20:42.257131] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x11b25e0 was disconnected and freed. delete nvme_qpair. 
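The get_* helpers used above all follow one pattern: query the host application over its private RPC socket (/tmp/host.sock), project a single field with jq, then sort and xargs-flatten the result so it compares as one space-separated string. Notification accounting works the same way: notify_get_notifications -i <last_id> returns only events newer than the last seen id, and its length becomes the next offset. A sketch of that pattern, under the assumption that rpc_cmd simply forwards to SPDK's scripts/rpc.py (in the trace it is a wrapper from autotest_common.sh, and SPDK_ROOT here is a placeholder):

# Stand-in for the test's rpc_cmd wrapper; SPDK_ROOT is an assumed path.
rpc_cmd() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }

# One field, sorted, flattened into a single comparable string.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Count only events newer than notify_id, then advance the offset.
get_notification_count() {
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

After the two nvmf_subsystem_add_ns calls above (null1 on cnode0, null3 on cnode20), two new-bdev notifications are expected, hence the [[ 2 == 2 ]] check and the bdev list growing to four entries in the @176 check below.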
00:21:00.122 [2024-12-06 17:20:42.399975] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:00.122 [2024-12-06 17:20:42.400011] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:00.122 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:00.122 cookie is 0 00:21:00.122 is_local: 1 00:21:00.122 our_own: 0 00:21:00.122 wide_area: 0 00:21:00.122 multicast: 1 00:21:00.122 cached: 1 00:21:00.122 [2024-12-06 17:20:42.400026] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:00.380 [2024-12-06 17:20:42.499979] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:00.380 [2024-12-06 17:20:42.500029] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:00.380 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:00.380 cookie is 0 00:21:00.380 is_local: 1 00:21:00.380 our_own: 0 00:21:00.380 wide_area: 0 00:21:00.380 multicast: 1 00:21:00.380 cached: 1 00:21:00.380 [2024-12-06 17:20:42.500044] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]] 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.314 [2024-12-06 17:20:43.377127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:01.314 [2024-12-06 17:20:43.378197] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:01.314 [2024-12-06 17:20:43.378241] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:01.314 [2024-12-06 17:20:43.378295] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:01.314 [2024-12-06 17:20:43.378313] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:01.314 [2024-12-06 17:20:43.385045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 *** 00:21:01.314 [2024-12-06 17:20:43.386187] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:01.314 [2024-12-06 17:20:43.386253] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.314 17:20:43 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1 00:21:01.314 [2024-12-06 17:20:43.520312] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns1_nvme0 00:21:01.314 [2024-12-06 17:20:43.520763] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns0_nvme0 00:21:01.314 [2024-12-06 17:20:43.581804] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:21:01.314 [2024-12-06 17:20:43.581890] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:01.314 
[2024-12-06 17:20:43.581904] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:01.314 [2024-12-06 17:20:43.581910] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:01.314 [2024-12-06 17:20:43.581934] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:01.314 [2024-12-06 17:20:43.582066] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421 00:21:01.314 [2024-12-06 17:20:43.582095] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:01.314 [2024-12-06 17:20:43.582104] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:01.314 [2024-12-06 17:20:43.582110] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:01.314 [2024-12-06 17:20:43.582126] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:01.574 [2024-12-06 17:20:43.628420] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:21:01.574 [2024-12-06 17:20:43.628466] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:01.574 [2024-12-06 17:20:43.628516] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:21:01.574 [2024-12-06 17:20:43.628537] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:02.141 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]] 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:21:02.399 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.400 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.662 [2024-12-06 17:20:44.698025] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:02.662 [2024-12-06 17:20:44.698206] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:02.662 [2024-12-06 17:20:44.698283] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:02.662 [2024-12-06 17:20:44.698304] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:02.662 [2024-12-06 17:20:44.698380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.698415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.698429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.698438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.698448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.698457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.698476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.698485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.698495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.662 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.662 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:21:02.662 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.662 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:02.662 [2024-12-06 17:20:44.706012] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:21:02.662 [2024-12-06 17:20:44.706097] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer 00:21:02.662 [2024-12-06 17:20:44.708327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.662 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.662 17:20:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1 00:21:02.662 [2024-12-06 17:20:44.712139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.712177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.712192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.712201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.712211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.712221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.712231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:02.662 [2024-12-06 17:20:44.712243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:02.662 [2024-12-06 17:20:44.712259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.662 [2024-12-06 17:20:44.718347] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.662 [2024-12-06 17:20:44.718388] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.662 [2024-12-06 17:20:44.718397] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.662 [2024-12-06 17:20:44.718403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.662 [2024-12-06 17:20:44.718458] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:02.662 [2024-12-06 17:20:44.718552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.662 [2024-12-06 17:20:44.718576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.662 [2024-12-06 17:20:44.718593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.662 [2024-12-06 17:20:44.718612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.662 [2024-12-06 17:20:44.718629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.662 [2024-12-06 17:20:44.718653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.662 [2024-12-06 17:20:44.718665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.662 [2024-12-06 17:20:44.718675] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.662 [2024-12-06 17:20:44.718694] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.662 [2024-12-06 17:20:44.718704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.662 [2024-12-06 17:20:44.722094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.662 [2024-12-06 17:20:44.728455] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.662 [2024-12-06 17:20:44.728610] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.662 [2024-12-06 17:20:44.728623] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.662 [2024-12-06 17:20:44.728629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.662 [2024-12-06 17:20:44.728671] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.662 [2024-12-06 17:20:44.728753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.662 [2024-12-06 17:20:44.728777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.662 [2024-12-06 17:20:44.728789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.662 [2024-12-06 17:20:44.728807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.662 [2024-12-06 17:20:44.728822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.663 [2024-12-06 17:20:44.728831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.663 [2024-12-06 17:20:44.728842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:21:02.663 [2024-12-06 17:20:44.728851] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.663 [2024-12-06 17:20:44.728858] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.663 [2024-12-06 17:20:44.728863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.663 [2024-12-06 17:20:44.732109] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.663 [2024-12-06 17:20:44.732262] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.663 [2024-12-06 17:20:44.732294] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.732300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.663 [2024-12-06 17:20:44.732335] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.732401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.663 [2024-12-06 17:20:44.732424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.663 [2024-12-06 17:20:44.732435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.663 [2024-12-06 17:20:44.732452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.663 [2024-12-06 17:20:44.732481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.663 [2024-12-06 17:20:44.732492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.663 [2024-12-06 17:20:44.732503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.663 [2024-12-06 17:20:44.732512] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.663 [2024-12-06 17:20:44.732518] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.663 [2024-12-06 17:20:44.732523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.663 [2024-12-06 17:20:44.738687] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.663 [2024-12-06 17:20:44.738715] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.663 [2024-12-06 17:20:44.738722] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.738728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.663 [2024-12-06 17:20:44.738763] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:02.663 [2024-12-06 17:20:44.738826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.663 [2024-12-06 17:20:44.738848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.663 [2024-12-06 17:20:44.738859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.663 [2024-12-06 17:20:44.738875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.663 [2024-12-06 17:20:44.738890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.663 [2024-12-06 17:20:44.738899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.663 [2024-12-06 17:20:44.738909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.663 [2024-12-06 17:20:44.738918] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.663 [2024-12-06 17:20:44.738924] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.663 [2024-12-06 17:20:44.738929] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.663 [2024-12-06 17:20:44.742344] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.663 [2024-12-06 17:20:44.742370] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.663 [2024-12-06 17:20:44.742376] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.742381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.663 [2024-12-06 17:20:44.742413] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.742492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.663 [2024-12-06 17:20:44.742514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.663 [2024-12-06 17:20:44.742525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.663 [2024-12-06 17:20:44.742541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.663 [2024-12-06 17:20:44.742567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.663 [2024-12-06 17:20:44.742578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.663 [2024-12-06 17:20:44.742588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.663 [2024-12-06 17:20:44.742598] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:02.663 [2024-12-06 17:20:44.742604] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.663 [2024-12-06 17:20:44.742609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.663 [2024-12-06 17:20:44.748776] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.663 [2024-12-06 17:20:44.748803] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.663 [2024-12-06 17:20:44.748810] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.748815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.663 [2024-12-06 17:20:44.748848] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.663 [2024-12-06 17:20:44.748908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.663 [2024-12-06 17:20:44.748930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.663 [2024-12-06 17:20:44.748942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.663 [2024-12-06 17:20:44.748957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.663 [2024-12-06 17:20:44.748972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.663 [2024-12-06 17:20:44.748981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.663 [2024-12-06 17:20:44.748991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.663 [2024-12-06 17:20:44.748999] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.663 [2024-12-06 17:20:44.749005] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.663 [2024-12-06 17:20:44.749010] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.663 [2024-12-06 17:20:44.752426] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.663 [2024-12-06 17:20:44.752455] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.663 [2024-12-06 17:20:44.752461] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.752467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.664 [2024-12-06 17:20:44.752500] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:02.664 [2024-12-06 17:20:44.752561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.664 [2024-12-06 17:20:44.752584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.664 [2024-12-06 17:20:44.752595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.664 [2024-12-06 17:20:44.752611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.664 [2024-12-06 17:20:44.752637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.664 [2024-12-06 17:20:44.752649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.664 [2024-12-06 17:20:44.752659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.664 [2024-12-06 17:20:44.752668] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.664 [2024-12-06 17:20:44.752674] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.664 [2024-12-06 17:20:44.752680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.664 [2024-12-06 17:20:44.758860] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.664 [2024-12-06 17:20:44.758889] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.664 [2024-12-06 17:20:44.758895] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.758901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.664 [2024-12-06 17:20:44.758935] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.758996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.664 [2024-12-06 17:20:44.759017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.664 [2024-12-06 17:20:44.759029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.664 [2024-12-06 17:20:44.759045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.664 [2024-12-06 17:20:44.759077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.664 [2024-12-06 17:20:44.759088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.664 [2024-12-06 17:20:44.759098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.664 [2024-12-06 17:20:44.759107] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:02.664 [2024-12-06 17:20:44.759113] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.664 [2024-12-06 17:20:44.759118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.664 [2024-12-06 17:20:44.762511] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.664 [2024-12-06 17:20:44.762686] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.664 [2024-12-06 17:20:44.762700] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.762706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.664 [2024-12-06 17:20:44.762743] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.762822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.664 [2024-12-06 17:20:44.762852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.664 [2024-12-06 17:20:44.762869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.664 [2024-12-06 17:20:44.762887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.664 [2024-12-06 17:20:44.762902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.664 [2024-12-06 17:20:44.762911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.664 [2024-12-06 17:20:44.762922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.664 [2024-12-06 17:20:44.762931] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.664 [2024-12-06 17:20:44.762937] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.664 [2024-12-06 17:20:44.762942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.664 [2024-12-06 17:20:44.768946] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.664 [2024-12-06 17:20:44.768975] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.664 [2024-12-06 17:20:44.768981] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.768987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.664 [2024-12-06 17:20:44.769020] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:02.664 [2024-12-06 17:20:44.769083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.664 [2024-12-06 17:20:44.769105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.664 [2024-12-06 17:20:44.769117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.664 [2024-12-06 17:20:44.769133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.664 [2024-12-06 17:20:44.769165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.664 [2024-12-06 17:20:44.769177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.664 [2024-12-06 17:20:44.769187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.664 [2024-12-06 17:20:44.769196] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.664 [2024-12-06 17:20:44.769204] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.664 [2024-12-06 17:20:44.769213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.664 [2024-12-06 17:20:44.772754] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.664 [2024-12-06 17:20:44.772907] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.664 [2024-12-06 17:20:44.772920] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.772926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.664 [2024-12-06 17:20:44.772963] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.664 [2024-12-06 17:20:44.773031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.664 [2024-12-06 17:20:44.773053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.664 [2024-12-06 17:20:44.773065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.664 [2024-12-06 17:20:44.773081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.664 [2024-12-06 17:20:44.773096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.664 [2024-12-06 17:20:44.773105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.664 [2024-12-06 17:20:44.773116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.665 [2024-12-06 17:20:44.773125] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:02.665 [2024-12-06 17:20:44.773131] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.665 [2024-12-06 17:20:44.773136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.665 [2024-12-06 17:20:44.779032] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.665 [2024-12-06 17:20:44.779058] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.665 [2024-12-06 17:20:44.779065] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.779070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.665 [2024-12-06 17:20:44.779105] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.779165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.665 [2024-12-06 17:20:44.779186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.665 [2024-12-06 17:20:44.779197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.665 [2024-12-06 17:20:44.779214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.665 [2024-12-06 17:20:44.779245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.665 [2024-12-06 17:20:44.779256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.665 [2024-12-06 17:20:44.779284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.665 [2024-12-06 17:20:44.779296] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.665 [2024-12-06 17:20:44.779302] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.665 [2024-12-06 17:20:44.779307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.665 [2024-12-06 17:20:44.782974] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.665 [2024-12-06 17:20:44.783122] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.665 [2024-12-06 17:20:44.783135] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.783141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.665 [2024-12-06 17:20:44.783177] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:02.665 [2024-12-06 17:20:44.783242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.665 [2024-12-06 17:20:44.783280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.665 [2024-12-06 17:20:44.783304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.665 [2024-12-06 17:20:44.783321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.665 [2024-12-06 17:20:44.783358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.665 [2024-12-06 17:20:44.783370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.665 [2024-12-06 17:20:44.783380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.665 [2024-12-06 17:20:44.783390] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.665 [2024-12-06 17:20:44.783396] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.665 [2024-12-06 17:20:44.783401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.665 [2024-12-06 17:20:44.789115] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.665 [2024-12-06 17:20:44.789141] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.665 [2024-12-06 17:20:44.789148] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.789153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.665 [2024-12-06 17:20:44.789191] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.789249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.665 [2024-12-06 17:20:44.789290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.665 [2024-12-06 17:20:44.789305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.665 [2024-12-06 17:20:44.789340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.665 [2024-12-06 17:20:44.789386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.665 [2024-12-06 17:20:44.789399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.665 [2024-12-06 17:20:44.789410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.665 [2024-12-06 17:20:44.789420] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:02.665 [2024-12-06 17:20:44.789426] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.665 [2024-12-06 17:20:44.789430] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.665 [2024-12-06 17:20:44.793188] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.665 [2024-12-06 17:20:44.793372] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.665 [2024-12-06 17:20:44.793386] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.793392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.665 [2024-12-06 17:20:44.793436] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.793590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.665 [2024-12-06 17:20:44.793622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.665 [2024-12-06 17:20:44.793634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.665 [2024-12-06 17:20:44.793651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.665 [2024-12-06 17:20:44.793666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.665 [2024-12-06 17:20:44.793676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.665 [2024-12-06 17:20:44.793686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.665 [2024-12-06 17:20:44.793695] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.665 [2024-12-06 17:20:44.793701] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.665 [2024-12-06 17:20:44.793706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.665 [2024-12-06 17:20:44.799203] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.665 [2024-12-06 17:20:44.799230] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.665 [2024-12-06 17:20:44.799237] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.665 [2024-12-06 17:20:44.799242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.666 [2024-12-06 17:20:44.799290] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:02.666 [2024-12-06 17:20:44.799351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.666 [2024-12-06 17:20:44.799374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.666 [2024-12-06 17:20:44.799385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.666 [2024-12-06 17:20:44.799401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.666 [2024-12-06 17:20:44.799434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.666 [2024-12-06 17:20:44.799445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.666 [2024-12-06 17:20:44.799455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.666 [2024-12-06 17:20:44.799464] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.666 [2024-12-06 17:20:44.799469] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.666 [2024-12-06 17:20:44.799474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.666 [2024-12-06 17:20:44.803445] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.666 [2024-12-06 17:20:44.803471] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.666 [2024-12-06 17:20:44.803477] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.803483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.666 [2024-12-06 17:20:44.803514] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.803571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.666 [2024-12-06 17:20:44.803592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.666 [2024-12-06 17:20:44.803603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.666 [2024-12-06 17:20:44.803619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.666 [2024-12-06 17:20:44.803633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.666 [2024-12-06 17:20:44.803643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.666 [2024-12-06 17:20:44.803653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.666 [2024-12-06 17:20:44.803662] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:21:02.666 [2024-12-06 17:20:44.803668] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.666 [2024-12-06 17:20:44.803673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.666 [2024-12-06 17:20:44.809301] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.666 [2024-12-06 17:20:44.809333] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.666 [2024-12-06 17:20:44.809344] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.809353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.666 [2024-12-06 17:20:44.809387] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.809446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.666 [2024-12-06 17:20:44.809468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.666 [2024-12-06 17:20:44.809479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.666 [2024-12-06 17:20:44.809494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.666 [2024-12-06 17:20:44.809528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.666 [2024-12-06 17:20:44.809539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.666 [2024-12-06 17:20:44.809550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.666 [2024-12-06 17:20:44.809559] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.666 [2024-12-06 17:20:44.809564] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.666 [2024-12-06 17:20:44.809569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.666 [2024-12-06 17:20:44.813524] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.666 [2024-12-06 17:20:44.813549] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.666 [2024-12-06 17:20:44.813555] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.813561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.666 [2024-12-06 17:20:44.813591] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:21:02.666 [2024-12-06 17:20:44.813647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.666 [2024-12-06 17:20:44.813669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.666 [2024-12-06 17:20:44.813680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.666 [2024-12-06 17:20:44.813696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.666 [2024-12-06 17:20:44.813710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.666 [2024-12-06 17:20:44.813719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.666 [2024-12-06 17:20:44.813729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.666 [2024-12-06 17:20:44.813738] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.666 [2024-12-06 17:20:44.813744] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.666 [2024-12-06 17:20:44.813749] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.666 [2024-12-06 17:20:44.819401] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.666 [2024-12-06 17:20:44.819425] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.666 [2024-12-06 17:20:44.819432] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.819437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.666 [2024-12-06 17:20:44.819468] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:21:02.666 [2024-12-06 17:20:44.819524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.666 [2024-12-06 17:20:44.819545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.666 [2024-12-06 17:20:44.819556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.666 [2024-12-06 17:20:44.819571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.666 [2024-12-06 17:20:44.819605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.667 [2024-12-06 17:20:44.819616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.667 [2024-12-06 17:20:44.819627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.667 [2024-12-06 17:20:44.819635] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:21:02.667 [2024-12-06 17:20:44.819641] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.667 [2024-12-06 17:20:44.819646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.667 [2024-12-06 17:20:44.823600] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.667 [2024-12-06 17:20:44.823752] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.667 [2024-12-06 17:20:44.823765] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.667 [2024-12-06 17:20:44.823771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.667 [2024-12-06 17:20:44.823810] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.667 [2024-12-06 17:20:44.823876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.667 [2024-12-06 17:20:44.823898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.667 [2024-12-06 17:20:44.823910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.667 [2024-12-06 17:20:44.823927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.667 [2024-12-06 17:20:44.823941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.667 [2024-12-06 17:20:44.823951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.667 [2024-12-06 17:20:44.823961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.667 [2024-12-06 17:20:44.823970] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:21:02.667 [2024-12-06 17:20:44.823976] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.667 [2024-12-06 17:20:44.823981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.667 [2024-12-06 17:20:44.829481] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:21:02.667 [2024-12-06 17:20:44.829507] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:21:02.667 [2024-12-06 17:20:44.829514] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:21:02.667 [2024-12-06 17:20:44.829520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:02.667 [2024-12-06 17:20:44.829550] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:21:02.667 [2024-12-06 17:20:44.829609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.667 [2024-12-06 17:20:44.829631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1128780 with addr=10.0.0.3, port=4420 00:21:02.667 [2024-12-06 17:20:44.829642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1128780 is same with the state(6) to be set 00:21:02.667 [2024-12-06 17:20:44.829658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1128780 (9): Bad file descriptor 00:21:02.667 [2024-12-06 17:20:44.829692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:21:02.667 [2024-12-06 17:20:44.829703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:21:02.667 [2024-12-06 17:20:44.829713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:21:02.667 [2024-12-06 17:20:44.829722] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:21:02.667 [2024-12-06 17:20:44.829728] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:21:02.667 [2024-12-06 17:20:44.829733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:21:02.667 [2024-12-06 17:20:44.833818] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:21:02.667 [2024-12-06 17:20:44.833976] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:21:02.667 [2024-12-06 17:20:44.833989] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:21:02.667 [2024-12-06 17:20:44.833995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:21:02.667 [2024-12-06 17:20:44.834032] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:21:02.667 [2024-12-06 17:20:44.834097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:02.667 [2024-12-06 17:20:44.834119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119d830 with addr=10.0.0.4, port=4420 00:21:02.667 [2024-12-06 17:20:44.834130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119d830 is same with the state(6) to be set 00:21:02.667 [2024-12-06 17:20:44.834147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119d830 (9): Bad file descriptor 00:21:02.667 [2024-12-06 17:20:44.834162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:21:02.667 [2024-12-06 17:20:44.834172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:21:02.667 [2024-12-06 17:20:44.834182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:21:02.667 [2024-12-06 17:20:44.834191] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
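Each block above is one pass of the bdev_nvme reset path: delete the qpairs, disconnect the ctrlr, start reconnecting, fail the TCP connect, then report "ctrlr could not be connected" / "Resetting controller failed" and reschedule the next attempt. How long this loops before the ctrlr is dropped is governed by the controller's reconnect parameters. A hypothetical attach with those knobs made explicit (short flags match the style used elsewhere in this suite; the long reconnect flag spellings are an assumption -- verify with scripts/rpc.py bdev_nvme_attach_controller -h):

  # sketch only, not this test's actual config: retry forever (-1), 10s between attempts
  rpc_cmd -s /tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 --ctrlr-loss-timeout-sec -1 --reconnect-delay-sec 10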
00:21:02.667 [2024-12-06 17:20:44.834197] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:21:02.667 [2024-12-06 17:20:44.834202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:21:02.667 [2024-12-06 17:20:44.835708] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:21:02.667 [2024-12-06 17:20:44.835755] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:02.667 [2024-12-06 17:20:44.835798] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:02.667 [2024-12-06 17:20:44.836688] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:21:02.667 [2024-12-06 17:20:44.836715] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:02.667 [2024-12-06 17:20:44.836735] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:02.667 [2024-12-06 17:20:44.921813] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:02.667 [2024-12-06 17:20:44.922785] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 
00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.602 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. 
| length' 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.860 17:20:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:21:03.860 [2024-12-06 17:20:45.999947] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:04.794 17:20:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:21:04.794 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:05.053 [2024-12-06 17:20:47.223141] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:21:05.053 2024/12/06 17:20:47 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:05.053 request: 00:21:05.053 { 00:21:05.053 "method": "bdev_nvme_start_mdns_discovery", 00:21:05.053 "params": { 00:21:05.053 "name": "mdns", 00:21:05.053 "svcname": "_nvme-disc._http", 00:21:05.053 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:05.053 } 00:21:05.053 } 00:21:05.053 Got JSON-RPC error response 00:21:05.053 GoRPCClient: error on JSON-RPC call 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:05.053 17:20:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:21:05.620 [2024-12-06 17:20:47.811819] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:21:05.620 [2024-12-06 17:20:47.911833] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:21:05.877 [2024-12-06 17:20:48.011837] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:05.877 [2024-12-06 17:20:48.011884] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:05.877 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:05.877 cookie is 0 00:21:05.877 is_local: 1 00:21:05.877 our_own: 0 00:21:05.877 wide_area: 0 00:21:05.877 multicast: 1 00:21:05.877 cached: 1 00:21:05.877 [2024-12-06 17:20:48.111832] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:05.877 [2024-12-06 17:20:48.112108] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:21:05.877 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:05.877 cookie is 0 00:21:05.877 is_local: 1 00:21:05.877 our_own: 0 00:21:05.877 wide_area: 0 00:21:05.877 multicast: 1 00:21:05.877 cached: 1 00:21:05.877 [2024-12-06 17:20:48.112130] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:21:06.134 [2024-12-06 17:20:48.211831] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:21:06.134 [2024-12-06 17:20:48.212080] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:06.134 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:06.134 cookie is 0 00:21:06.134 is_local: 1 00:21:06.134 our_own: 0 00:21:06.134 wide_area: 0 00:21:06.134 multicast: 1 00:21:06.134 cached: 1 00:21:06.135 [2024-12-06 17:20:48.311832] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:21:06.135 [2024-12-06 17:20:48.312089] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:21:06.135 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:06.135 cookie is 0 00:21:06.135 is_local: 1 00:21:06.135 our_own: 0 00:21:06.135 wide_area: 0 00:21:06.135 multicast: 1 00:21:06.135 cached: 1 00:21:06.135 [2024-12-06 17:20:48.312116] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:21:07.063 [2024-12-06 17:20:49.020006] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:21:07.063 [2024-12-06 17:20:49.020224] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:21:07.063 [2024-12-06 17:20:49.020285] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:21:07.063 [2024-12-06 17:20:49.106142] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:21:07.063 [2024-12-06 17:20:49.164639] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:21:07.063 [2024-12-06 17:20:49.165378] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x11bdb00:1 started. 00:21:07.063 [2024-12-06 17:20:49.166897] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:21:07.063 [2024-12-06 17:20:49.167080] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:21:07.063 [2024-12-06 17:20:49.168766] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x11bdb00 was disconnected and freed. delete nvme_qpair. 
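Two things happen in the stretch above. First, bdev_nvme_start_mdns_discovery is called a second time while a service named "mdns" is already registered, and the target answers JSON-RPC error Code=-17 Msg=File exists (i.e. -EEXIST): the module enforces uniqueness both per discovery name and, as the later cdc attempt shows, per svcname. The test asserts exactly this with the suite's NOT wrapper, which inverts the exit status of rpc_cmd (pattern taken from the trace above):

  NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
  # passes only because the RPC fails with Code=-17 Msg=File exists

Second, once discovery is restarted, the avahi browser resolves the _nvme-disc._tcp records again (the TXT payload carries the discovery NQN and "p=tcp"), the discovery log page reports the subsystems on port 4421, and fresh ctrlrs are attached there -- mdns0_nvme0 for 10.0.0.4 above, mdns1_nvme0 for 10.0.0.3 just below.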
00:21:07.063 [2024-12-06 17:20:49.219994] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:21:07.064 [2024-12-06 17:20:49.220238] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:21:07.064 [2024-12-06 17:20:49.220453] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:21:07.064 [2024-12-06 17:20:49.306149] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:21:07.321 [2024-12-06 17:20:49.364963] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:21:07.321 [2024-12-06 17:20:49.365945] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x11b72e0:1 started. 00:21:07.321 [2024-12-06 17:20:49.367633] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:21:07.321 [2024-12-06 17:20:49.367800] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:21:07.321 [2024-12-06 17:20:49.369260] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x11b72e0 was disconnected and freed. delete nvme_qpair. 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.601 [2024-12-06 17:20:52.437199] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:21:10.601 2024/12/06 17:20:52 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:21:10.601 request: 00:21:10.601 { 00:21:10.601 "method": "bdev_nvme_start_mdns_discovery", 00:21:10.601 "params": { 00:21:10.601 "name": "cdc", 00:21:10.601 "svcname": "_nvme-disc._tcp", 00:21:10.601 "hostnqn": "nqn.2021-12.io.spdk:test" 00:21:10.601 } 00:21:10.601 } 00:21:10.601 Got JSON-RPC error response 00:21:10.601 GoRPCClient: error on JSON-RPC call 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:21:10.601 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:10.602 [2024-12-06 17:20:52.611804] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:10.602 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:10.602 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:21:10.602 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:10.602 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.602 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.602 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:10.602 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.602 17:20:52 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:11.590 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:21:11.590 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:21:11.590 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94743 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94743 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94760 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.590 17:20:53 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:21:11.590 Got SIGTERM, quitting. 00:21:11.590 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:21:11.590 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:21:11.590 avahi-daemon 0.8 exiting. 
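The check_mdns_request_exists checks traced above reduce to one pattern: dump the DNS-SD records in machine-parseable form and substring-match each line against the expected service name, address, and port. A simplified reconstruction of that helper, assuming the same avahi-browse flags as the run above (the real helper in mdns_discovery.sh handles a few more cases):

    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4
        local -a lines
        local line found=1
        # -t: terminate after the cache dump, -r: resolve services,
        # -p: machine-parseable ';'-separated output
        readarray -t lines < <(avahi-browse -t -r _nvme-disc._tcp -p)
        for line in "${lines[@]}"; do
            # Only resolved ('=') records carry hostname, address and port.
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                found=0
            fi
        done
        if [[ $check_type == found ]]; then
            return $found
        fi
        (( found == 1 ))   # "not found" expected: succeed only if no match
    }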
00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.849 rmmod nvme_tcp 00:21:11.849 rmmod nvme_fabrics 00:21:11.849 rmmod nvme_keyring 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 94711 ']' 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 94711 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 94711 ']' 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 94711 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94711 00:21:11.849 killing process with pid 94711 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94711' 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 94711 00:21:11.849 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 94711 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:12.108 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:21:12.367 00:21:12.367 real 0m21.663s 00:21:12.367 user 0m42.228s 00:21:12.367 sys 0m1.968s 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:12.367 ************************************ 00:21:12.367 END TEST nvmf_mdns_discovery 00:21:12.367 ************************************ 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.367 ************************************ 00:21:12.367 START TEST nvmf_host_multipath 00:21:12.367 ************************************ 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:12.367 * Looking for test storage... 
00:21:12.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:21:12.367 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.627 --rc genhtml_branch_coverage=1 00:21:12.627 --rc genhtml_function_coverage=1 00:21:12.627 --rc genhtml_legend=1 00:21:12.627 --rc geninfo_all_blocks=1 00:21:12.627 --rc geninfo_unexecuted_blocks=1 00:21:12.627 00:21:12.627 ' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.627 --rc genhtml_branch_coverage=1 00:21:12.627 --rc genhtml_function_coverage=1 00:21:12.627 --rc genhtml_legend=1 00:21:12.627 --rc geninfo_all_blocks=1 00:21:12.627 --rc geninfo_unexecuted_blocks=1 00:21:12.627 00:21:12.627 ' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.627 --rc genhtml_branch_coverage=1 00:21:12.627 --rc genhtml_function_coverage=1 00:21:12.627 --rc genhtml_legend=1 00:21:12.627 --rc geninfo_all_blocks=1 00:21:12.627 --rc geninfo_unexecuted_blocks=1 00:21:12.627 00:21:12.627 ' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:12.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.627 --rc genhtml_branch_coverage=1 00:21:12.627 --rc genhtml_function_coverage=1 00:21:12.627 --rc genhtml_legend=1 00:21:12.627 --rc geninfo_all_blocks=1 00:21:12.627 --rc geninfo_unexecuted_blocks=1 00:21:12.627 00:21:12.627 ' 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.627 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.628 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:12.628 Cannot find device "nvmf_init_br" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:12.628 Cannot find device "nvmf_init_br2" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:12.628 Cannot find device "nvmf_tgt_br" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.628 Cannot find device "nvmf_tgt_br2" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:12.628 Cannot find device "nvmf_init_br" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:12.628 Cannot find device "nvmf_init_br2" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:12.628 Cannot find device "nvmf_tgt_br" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:12.628 Cannot find device "nvmf_tgt_br2" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:12.628 Cannot find device "nvmf_br" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:12.628 Cannot find device "nvmf_init_if" 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:12.628 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:12.629 Cannot find device "nvmf_init_if2" 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:12.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:12.629 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:12.888 17:20:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:12.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:12.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:21:12.888 00:21:12.888 --- 10.0.0.3 ping statistics --- 00:21:12.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.888 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:12.888 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:12.888 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:21:12.888 00:21:12.888 --- 10.0.0.4 ping statistics --- 00:21:12.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.888 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:21:12.888 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:12.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:12.889 00:21:12.889 --- 10.0.0.1 ping statistics --- 00:21:12.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.889 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:12.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:12.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:21:12.889 00:21:12.889 --- 10.0.0.2 ping statistics --- 00:21:12.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.889 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=95413 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 95413 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95413 ']' 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.889 17:20:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:13.150 [2024-12-06 17:20:55.248982] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
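Everything from "ip netns add" through the ping checks above is nvmf_veth_init building a self-contained topology: the target lives in its own network namespace, each side gets veth pairs, and the host-side peer ends are enslaved to one bridge so the initiator addresses (10.0.0.1, 10.0.0.2) can reach the target addresses (10.0.0.3, 10.0.0.4). Reduced to a single interface per side, the sketch below captures the shape of it; the nvmf_tgt invocation at the end matches the one this run just issued:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Allow NVMe/TCP traffic in, then start the target inside the namespace.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &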
00:21:13.150 [2024-12-06 17:20:55.249375] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.150 [2024-12-06 17:20:55.410624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:13.408 [2024-12-06 17:20:55.457532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.408 [2024-12-06 17:20:55.457807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.408 [2024-12-06 17:20:55.457976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.408 [2024-12-06 17:20:55.458119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.408 [2024-12-06 17:20:55.458165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.408 [2024-12-06 17:20:55.459297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.408 [2024-12-06 17:20:55.459305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=95413 00:21:14.340 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:14.599 [2024-12-06 17:20:56.647573] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.599 17:20:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:14.857 Malloc0 00:21:14.857 17:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:15.115 17:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:15.681 17:20:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:15.940 [2024-12-06 17:20:58.072934] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:15.940 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:21:16.199 [2024-12-06 17:20:58.361099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:16.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95519 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95519 /var/tmp/bdevperf.sock 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95519 ']' 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.199 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:16.765 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.765 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:16.765 17:20:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:17.023 17:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:17.283 Nvme0n1 00:21:17.283 17:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:17.546 Nvme0n1 00:21:17.803 17:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:17.803 17:20:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:18.737 17:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:18.737 17:21:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:18.995 17:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
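The xtrace above is the whole multipath fixture: an ANA-reporting subsystem with two TCP listeners, plus a bdevperf host that attaches the same controller once per listener. Condensed into a standalone sketch (rpc.py and bdevperf stand for the full /home/vagrant/spdk_repo paths used in this run; addresses, ports and NQNs as in the log):

  # Target side: subsystem with ANA reporting (-r) and two portals for one namespace
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

  # Host side: bdevperf in RPC mode (-z), the same bdev attached through both portals
  # (-x multipath); -r -1 / -l -1 -o 10 keep retrying and reconnecting indefinitely
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10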
00:21:19.562 17:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:19.562 17:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95597 00:21:19.562 17:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:19.562 17:21:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:26.119 17:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:26.119 17:21:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.119 Attaching 4 probes... 00:21:26.119 @path[10.0.0.3, 4421]: 15295 00:21:26.119 @path[10.0.0.3, 4421]: 13224 00:21:26.119 @path[10.0.0.3, 4421]: 12299 00:21:26.119 @path[10.0.0.3, 4421]: 13649 00:21:26.119 @path[10.0.0.3, 4421]: 15335 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95597 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:26.119 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:26.377 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:26.635 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:26.635 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95730 00:21:26.635 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:26.635 17:21:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:33.194 17:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | 
select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:21:33.194 17:21:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.194 Attaching 4 probes... 00:21:33.194 @path[10.0.0.3, 4420]: 16989 00:21:33.194 @path[10.0.0.3, 4420]: 17545 00:21:33.194 @path[10.0.0.3, 4420]: 17392 00:21:33.194 @path[10.0.0.3, 4420]: 17462 00:21:33.194 @path[10.0.0.3, 4420]: 17071 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95730 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:33.194 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:33.452 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:21:33.452 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95865 00:21:33.452 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:33.452 17:21:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:40.014 17:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:40.014 17:21:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.014 Attaching 4 probes... 
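Each confirm_io_on_port round, including the @path samples that follow below, reduces to the same three steps traced at multipath.sh@67-71: ask the target which listener currently advertises the expected ANA state, let the bpftrace probe count I/O per path for six seconds, then compare the two. A sketch reconstructed from the traced commands, not the verbatim helper (trace.txt path and probe output format as in this run):

  # 1) Which port does the target say should be carrying I/O?
  active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
      jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
  # 2) Which port actually carried I/O?  trace.txt lines look like:
  #      @path[10.0.0.3, 4421]: 15295
  port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | sed -n 1p | cut -d ']' -f1)
  # 3) The round passes only if the observed port matches the advertised one
  [[ $port == "$active_port" ]]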
00:21:40.014 @path[10.0.0.3, 4421]: 12234 00:21:40.014 @path[10.0.0.3, 4421]: 16141 00:21:40.014 @path[10.0.0.3, 4421]: 15792 00:21:40.014 @path[10.0.0.3, 4421]: 15163 00:21:40.014 @path[10.0.0.3, 4421]: 16127 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95865 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:21:40.014 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:40.273 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:40.532 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:21:40.532 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95997 00:21:40.532 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:40.532 17:21:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.091 Attaching 4 probes... 
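This round is the degenerate case: with both listeners set inaccessible, no path may carry I/O, which is why the probe output that follows is empty, bare timestamps with no @path counters. Both sides of the comparison then collapse to empty strings, so the check that passes is (same sketch as above, specialized):

  active_port=""            # jq: no listener reports ana_state == ""
  port=""                   # awk: no @path[...] samples in trace.txt
  [[ '' == '' ]]            # "no I/O on any path" is the expected result here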
00:21:47.091 00:21:47.091 00:21:47.091 00:21:47.091 00:21:47.091 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95997 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:47.091 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:21:47.092 17:21:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:47.092 17:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:47.350 17:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:21:47.350 17:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96129 00:21:47.350 17:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:47.350 17:21:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:53.920 Attaching 4 probes... 
00:21:53.920 @path[10.0.0.3, 4421]: 15917
00:21:53.920 @path[10.0.0.3, 4421]: 15800
00:21:53.920 @path[10.0.0.3, 4421]: 15991
00:21:53.920 @path[10.0.0.3, 4421]: 15877
00:21:53.920 @path[10.0.0.3, 4421]: 15961
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}'
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:21:53.920 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96129
00:21:53.921 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:21:53.921 17:21:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:21:53.921 [2024-12-06 17:21:36.181468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fae90 is same with the state(6) to be set
[the same nvmf_tcp_qpair_set_recv_state *ERROR* repeated ~40 more times, through 17:21:36.181884, as the 4421 listener and its qpairs were torn down; duplicate entries elided]
00:21:53.921 17:21:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1
00:21:55.327 17:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420
00:21:55.327 17:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96265
00:21:55.327 17:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:21:55.327 17:21:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:22:01.897 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:22:01.897 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid'
00:22:01.897 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420
00:22:01.897 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:22:01.897 Attaching 4 probes... 
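The samples below show the host riding out the failure: with the optimized 4421 listener gone (the recv-state flood above is its qpairs draining), bdevperf's I/O fails over to the surviving non_optimized path on 4420 instead of stalling. The failback half of the test then restores the listener and promotes it again, i.e. (commands as traced in this run):

  # Failover happens implicitly on the host once the listener vanishes:
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # Failback: re-create the portal and make it the preferred (optimized) path again
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4421 -n optimized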
00:22:01.897 @path[10.0.0.3, 4420]: 16128 00:22:01.897 @path[10.0.0.3, 4420]: 16266 00:22:01.897 @path[10.0.0.3, 4420]: 16269 00:22:01.897 @path[10.0.0.3, 4420]: 14461 00:22:01.898 @path[10.0.0.3, 4420]: 16223 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96265 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:01.898 [2024-12-06 17:21:43.920622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:01.898 17:21:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:02.155 17:21:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:08.759 17:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:08.759 17:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96460 00:22:08.759 17:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 95413 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:08.759 17:21:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:14.027 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:14.027 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:14.674 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:14.674 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.674 Attaching 4 probes... 
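Once the @path samples below confirm I/O back on the re-promoted 4421 listener, the test kills bdevperf, which dumps the per-job JSON summary seen further down. To pull the headline numbers out of such a block, something like this works after stripping the leading log timestamps (results.log is a hypothetical capture of just the JSON block; field names as printed in this run):

  sed 's/^[0-9:.]* //' results.log | jq -r \
      '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency, \(.status)"'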
00:22:14.674 @path[10.0.0.3, 4421]: 15167 00:22:14.674 @path[10.0.0.3, 4421]: 15833 00:22:14.675 @path[10.0.0.3, 4421]: 15893 00:22:14.675 @path[10.0.0.3, 4421]: 15717 00:22:14.675 @path[10.0.0.3, 4421]: 15899 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96460 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95519 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95519 ']' 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95519 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95519 00:22:14.675 killing process with pid 95519 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95519' 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95519 00:22:14.675 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95519 00:22:14.675 { 00:22:14.675 "results": [ 00:22:14.675 { 00:22:14.675 "job": "Nvme0n1", 00:22:14.675 "core_mask": "0x4", 00:22:14.675 "workload": "verify", 00:22:14.675 "status": "terminated", 00:22:14.675 "verify_range": { 00:22:14.675 "start": 0, 00:22:14.675 "length": 16384 00:22:14.675 }, 00:22:14.675 "queue_depth": 128, 00:22:14.675 "io_size": 4096, 00:22:14.675 "runtime": 56.749509, 00:22:14.675 "iops": 6815.6713038697835, 00:22:14.675 "mibps": 26.623716030741342, 00:22:14.675 "io_failed": 0, 00:22:14.675 "io_timeout": 0, 00:22:14.675 "avg_latency_us": 18745.139549753374, 00:22:14.675 "min_latency_us": 1109.6436363636365, 00:22:14.675 "max_latency_us": 7046430.72 00:22:14.675 } 00:22:14.675 ], 00:22:14.675 "core_count": 1 00:22:14.675 } 00:22:14.953 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95519 00:22:14.953 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:14.954 [2024-12-06 17:20:58.429994] Starting SPDK v25.01-pre git sha1 c7acbd6be / 
DPDK 24.03.0 initialization... 00:22:14.954 [2024-12-06 17:20:58.430104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95519 ] 00:22:14.954 [2024-12-06 17:20:58.572918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.954 [2024-12-06 17:20:58.622785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.954 Running I/O for 90 seconds... 00:22:14.954 8272.00 IOPS, 32.31 MiB/s [2024-12-06T17:21:57.252Z] 8500.50 IOPS, 33.21 MiB/s [2024-12-06T17:21:57.252Z] 8388.00 IOPS, 32.77 MiB/s [2024-12-06T17:21:57.252Z] 7937.75 IOPS, 31.01 MiB/s [2024-12-06T17:21:57.252Z] 7601.20 IOPS, 29.69 MiB/s [2024-12-06T17:21:57.252Z] 7424.50 IOPS, 29.00 MiB/s [2024-12-06T17:21:57.252Z] 7445.00 IOPS, 29.08 MiB/s [2024-12-06T17:21:57.252Z] 7495.25 IOPS, 29.28 MiB/s [2024-12-06T17:21:57.252Z] [2024-12-06 17:21:08.751371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:14.954 
[2024-12-06 17:21:08.751763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.751971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.751994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:31 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.954 [2024-12-06 17:21:08.752382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.954 [2024-12-06 17:21:08.752862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.954 [2024-12-06 17:21:08.752884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.752908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.752932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.752949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.752971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.752988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.753011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.753027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 
[2024-12-06 17:21:08.754489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.955 [2024-12-06 17:21:08.754860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:14.955 [2024-12-06 17:21:08.754882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1832 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000
00:22:14.955 [2024-12-06 17:21:08.754898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:22:14.955 [2024-12-06 17:21:08.754920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:14.955 [2024-12-06 17:21:08.754935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
[... identical WRITE/completion pairs repeat for lba:1848 through lba:2400 in steps of 8 (qid:1, cids vary, sqhd:001a through sqhd:005f); every write completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:22:14.957 7554.67 IOPS, 29.51 MiB/s [2024-12-06T17:21:57.255Z] 7644.90 IOPS, 29.86 MiB/s [2024-12-06T17:21:57.255Z] 7747.45 IOPS, 30.26 MiB/s [2024-12-06T17:21:57.255Z] 7832.42 IOPS, 30.60 MiB/s [2024-12-06T17:21:57.255Z] 7900.23 IOPS, 30.86 MiB/s [2024-12-06T17:21:57.255Z] 7945.79 IOPS, 31.04 MiB/s [2024-12-06T17:21:57.255Z] 7992.60 IOPS, 31.22 MiB/s [2024-12-06T17:21:57.255Z]
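The periodic throughput samples above are consistent with the 0x1000-byte (4 KiB) I/O size printed in each command notice: 7554.67 IOPS at 4096 bytes per I/O works out to roughly 29.51 MiB/s, matching the first sample. A quick sanity check (a one-liner sketch, not part of the test output):

    awk 'BEGIN { printf "%.2f MiB/s\n", 7554.67 * 4096 / 1048576 }'   # prints 29.51 MiB/s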
00:22:14.957 [2024-12-06 17:21:15.378127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:14.957 [2024-12-06 17:21:15.378208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:22:14.959 [2024-12-06 17:21:15.381527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.959 [2024-12-06 17:21:15.381543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0
[... the run continues in the same pattern: WRITE/completion pairs for lba:65032 through lba:65520 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ/completion pairs for lba:64528 through lba:64920 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every command completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0, with sqhd wrapping from 007f to 0000 ...]
00:22:14.960 [2024-12-06 17:21:15.384310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:22:14.960 [2024-12-06 17:21:15.384328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.384746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.960 [2024-12-06 17:21:15.384763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.960 [2024-12-06 17:21:15.385294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.960 [2024-12-06 17:21:15.385343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.960 [2024-12-06 17:21:15.385381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.960 [2024-12-06 17:21:15.385420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.960 [2024-12-06 17:21:15.385458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.960 [2024-12-06 17:21:15.385500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:14.960 [2024-12-06 17:21:15.385523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.385960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.385976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:22:14.961 [2024-12-06 17:21:15.385998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.386974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.386989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.387011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.387027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.387049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.387065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.387087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.961 [2024-12-06 17:21:15.387103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:14.961 [2024-12-06 17:21:15.387125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:14.962 [2024-12-06 17:21:15.387216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.387597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.387859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.387874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.962 [2024-12-06 17:21:15.404935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.404965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.404986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.405041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.405064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.405093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.405114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.405143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.405163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 
00:22:14.962 [2024-12-06 17:21:15.405192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.405213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.405242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.405262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:14.962 [2024-12-06 17:21:15.405311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.962 [2024-12-06 17:21:15.405336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.405365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.405386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.405415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.405435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.405466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.405486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.407962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.407995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:65480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.963 [2024-12-06 17:21:15.408725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.408951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:37 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.408972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.409000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:14.963 [2024-12-06 17:21:15.409051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.963 [2024-12-06 17:21:15.409071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.964 [2024-12-06 17:21:15.409460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:14.964 [2024-12-06 17:21:15.409489] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.964 [2024-12-06 17:21:15.409510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:14.964 [2024-12-06 17:21:15.409539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.964 [2024-12-06 17:21:15.409559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:22:14.964 [2024-12-06 17:21:15.409588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:14.964 [2024-12-06 17:21:15.409609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
[... several hundred similar *NOTICE* command/completion pairs elided: READ (SGL TRANSPORT DATA BLOCK, lba 64520-65016) and WRITE (SGL DATA BLOCK OFFSET, lba 65024-65536) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), elapsed 00:22:14.964-00:22:14.969, wall clock 17:21:15.409-17:21:15.431 ...]
00:22:14.969 [2024-12-06 17:21:15.431446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.969 [2024-12-06 17:21:15.431462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.431484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.431500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.431522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.431538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.431560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.431576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.431599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.431615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:14.969 [2024-12-06 17:21:15.432789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:14.969 [2024-12-06 17:21:15.432980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.969 [2024-12-06 17:21:15.432996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:65480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:65488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:65504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:65512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:65520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.433796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.433933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:22:14.970 [2024-12-06 17:21:15.433971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.433986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:65000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.970 [2024-12-06 17:21:15.434427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.434464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.434502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:14.970 [2024-12-06 17:21:15.434524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.970 [2024-12-06 17:21:15.434539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:65528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:65536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.434955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.434985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.435006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.435862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.435898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.435936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:65120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.435959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.435988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:65144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 
[2024-12-06 17:21:15.436107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.436979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.436999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.437028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.437049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.437077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.437098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:14.971 [2024-12-06 17:21:15.437126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.971 [2024-12-06 17:21:15.437147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437655] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.437970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.437999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.438020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.438069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.438118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 
m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.438918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.972 [2024-12-06 17:21:15.438967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.438995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.439016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.439045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.439065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.439094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.439114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.972 [2024-12-06 17:21:15.439142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.972 [2024-12-06 17:21:15.439162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:22:14.973 [2024-12-06 17:21:15.439191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.973 [2024-12-06 17:21:15.439211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
[2024-12-06 17:21:15.439240 - 17:21:15.463482: nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs of the same form repeat for every remaining qid:1 I/O, covering READ lba:64520-65016 and WRITE lba:65024-65536 (len:8 each); every completion returns ASYMMETRIC ACCESS INACCESSIBLE (03/02) with p:0 m:0 dnr:0]
00:22:14.978 [2024-12-06 17:21:15.463518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:65136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:14.978 [2024-12-06 17:21:15.463539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:22:14.978 [2024-12-06 17:21:15.463575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:65144
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.463633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:65152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.463690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.463748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:65168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.463823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:65176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.463883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:65184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.463952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:65192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.463973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:65200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:65208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:65216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:65232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:65240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:65256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:65264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:65272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:65280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:65288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:65296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 
00:22:14.978 [2024-12-06 17:21:15.464795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:65304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:65312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:65320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.464967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.464989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:65336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:65344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:65352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:65360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:65368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:65384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:65392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:14.978 [2024-12-06 17:21:15.465524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:65400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.978 [2024-12-06 17:21:15.465546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:65408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:65424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:65440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:65448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:65456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.465959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.465994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:14.979 [2024-12-06 17:21:15.466578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.466903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.979 [2024-12-06 17:21:15.466960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.466995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.467017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.467053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.467075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.467111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.467132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.467168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 
nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.467189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.467226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.467248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:15.467548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:15.467582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:14.979 7672.06 IOPS, 29.97 MiB/s [2024-12-06T17:21:57.277Z] 7501.12 IOPS, 29.30 MiB/s [2024-12-06T17:21:57.277Z] 7537.00 IOPS, 29.44 MiB/s [2024-12-06T17:21:57.277Z] 7551.26 IOPS, 29.50 MiB/s [2024-12-06T17:21:57.277Z] 7570.90 IOPS, 29.57 MiB/s [2024-12-06T17:21:57.277Z] 7579.33 IOPS, 29.61 MiB/s [2024-12-06T17:21:57.277Z] 7605.82 IOPS, 29.71 MiB/s [2024-12-06T17:21:57.277Z] [2024-12-06 17:21:22.645085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:68296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:68304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:68312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:68320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645482] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.645521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.645537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:14.979 [2024-12-06 17:21:22.648114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.979 [2024-12-06 17:21:22.648156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:68400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:68456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:68488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.648962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.648996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.649013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.649057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.980 [2024-12-06 17:21:22.649199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:68528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649492] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.980 [2024-12-06 17:21:22.649809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:14.980 [2024-12-06 17:21:22.649836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.649852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.649879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.649895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.649923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:14.981 [2024-12-06 17:21:22.649943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.649971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.649987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:34 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.650972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.650989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.651015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.651031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.651058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.651074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.651101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.651117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:14.981 [2024-12-06 17:21:22.651145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:14.981 [2024-12-06 17:21:22.651161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:14.981 7494.61 IOPS, 29.28 MiB/s [2024-12-06T17:21:57.279Z] 7182.33 IOPS, 28.06 MiB/s [2024-12-06T17:21:57.279Z] 6895.04 IOPS, 26.93 MiB/s [2024-12-06T17:21:57.279Z] 6629.85 IOPS, 25.90 MiB/s [2024-12-06T17:21:57.279Z] 6384.30 IOPS, 24.94 MiB/s [2024-12-06T17:21:57.279Z] 6156.29 IOPS, 24.05 MiB/s [2024-12-06T17:21:57.279Z] 5944.00 IOPS, 23.22 MiB/s [2024-12-06T17:21:57.279Z] 5854.23 IOPS, 22.87 MiB/s [2024-12-06T17:21:57.279Z] 5922.26 IOPS, 23.13 MiB/s [2024-12-06T17:21:57.279Z] 5984.56 IOPS, 23.38 MiB/s [2024-12-06T17:21:57.279Z] 6044.42 IOPS, 23.61 MiB/s [2024-12-06T17:21:57.279Z] 6099.97 IOPS, 23.83 MiB/s [2024-12-06T17:21:57.279Z] 6155.69 IOPS, 24.05 MiB/s [2024-12-06T17:21:57.279Z] 6207.19 IOPS, 24.25 MiB/s [2024-12-06T17:21:57.279Z] [2024-12-06 17:21:36.182679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:89384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:14.981 [2024-12-06 17:21:36.182738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:14.981 [2024-12-06 17:21:36.182766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:89392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:14.981 [2024-12-06 17:21:36.182782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every outstanding I/O on qid:1 - READs lba 89400-89648 (len:8, SGL TRANSPORT DATA BLOCK) and WRITEs lba 89840-90384 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) - each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:14.984 [2024-12-06 17:21:36.185995] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:14.984 [2024-12-06 17:21:36.186006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:14.984 [2024-12-06 17:21:36.186017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90400 len:8 PRP1 0x0 PRP2 0x0
00:22:14.984 [2024-12-06 17:21:36.186030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the abort/manual-complete sequence repeats for the remaining queued requests: WRITEs lba 90392-90400 and READs lba 89656-89832 ...]
00:22:14.985 [2024-12-06 17:21:36.187345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:14.985 [2024-12-06 17:21:36.187373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated for the admin queue's remaining ASYNC EVENT REQUESTs, cid:1 through cid:3 ...]
00:22:14.985 [2024-12-06 17:21:36.187472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dd90 is same with the state(6) to be set
00:22:14.985 [2024-12-06 17:21:36.188842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:14.985 [2024-12-06 17:21:36.188884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138dd90 (9): Bad file descriptor
00:22:14.985 [2024-12-06 17:21:36.189016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:14.985 [2024-12-06 17:21:36.189048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x138dd90 with addr=10.0.0.3, port=4421
00:22:14.985 [2024-12-06 17:21:36.189065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x138dd90 is same with the state(6) to be set
00:22:14.985 [2024-12-06 17:21:36.189090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138dd90 (9): Bad file descriptor
00:22:14.985 [2024-12-06 17:21:36.189113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:22:14.985 [2024-12-06 17:21:36.189129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:22:14.985 [2024-12-06 17:21:36.189145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:22:14.985 [2024-12-06 17:21:36.189158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:22:14.985 [2024-12-06 17:21:36.189173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:22:14.985 6256.14 IOPS, 24.44 MiB/s [2024-12-06T17:21:57.283Z]
6292.05 IOPS, 24.58 MiB/s [2024-12-06T17:21:57.283Z]
6341.05 IOPS, 24.77 MiB/s [2024-12-06T17:21:57.283Z]
6384.88 IOPS, 24.94 MiB/s [2024-12-06T17:21:57.283Z]
6424.37 IOPS, 25.10 MiB/s [2024-12-06T17:21:57.283Z]
6448.40 IOPS, 25.19 MiB/s [2024-12-06T17:21:57.283Z]
6487.16 IOPS, 25.34 MiB/s [2024-12-06T17:21:57.283Z]
6507.55 IOPS, 25.42 MiB/s [2024-12-06T17:21:57.283Z]
6546.31 IOPS, 25.57 MiB/s [2024-12-06T17:21:57.283Z]
6582.46 IOPS, 25.71 MiB/s [2024-12-06T17:21:57.283Z]
[2024-12-06 17:21:46.278440] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:22:14.985 6610.62 IOPS, 25.82 MiB/s [2024-12-06T17:21:57.283Z]
6639.29 IOPS, 25.93 MiB/s [2024-12-06T17:21:57.283Z]
6659.20 IOPS, 26.01 MiB/s [2024-12-06T17:21:57.283Z]
6681.32 IOPS, 26.10 MiB/s [2024-12-06T17:21:57.283Z]
6700.35 IOPS, 26.17 MiB/s [2024-12-06T17:21:57.283Z]
6725.75 IOPS, 26.27 MiB/s [2024-12-06T17:21:57.283Z]
6746.53 IOPS, 26.35 MiB/s [2024-12-06T17:21:57.283Z]
6769.50 IOPS, 26.44 MiB/s [2024-12-06T17:21:57.283Z]
6790.31 IOPS, 26.52 MiB/s [2024-12-06T17:21:57.283Z]
6811.02 IOPS, 26.61 MiB/s [2024-12-06T17:21:57.283Z]
Received shutdown signal, test time was about 56.750596 seconds
00:22:14.985
00:22:14.985 Latency(us)
00:22:14.985 [2024-12-06T17:21:57.283Z] Device Information          : runtime(s)     IOPS    MiB/s  Fail/s  TO/s   Average       min         max
00:22:14.985 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:14.985 Verification LBA range: start 0x0 length 0x4000
00:22:14.985 Nvme0n1                     :      56.75  6815.67    26.62    0.00   0.00  18745.14   1109.64  7046430.72
00:22:14.985 [2024-12-06T17:21:57.283Z] ===================================================================================================================
00:22:14.985 [2024-12-06T17:21:57.284Z] Total                       :             6815.67    26.62    0.00   0.00  18745.14   1109.64  7046430.72
00:22:14.986 17:21:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:15.244 rmmod nvme_tcp
00:22:15.244 rmmod nvme_fabrics
00:22:15.244 rmmod nvme_keyring
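The run above ends with the perf tool's rolling throughput samples interleaved with a final latency table. As a convenience, here is a minimal sketch of how those numbers could be scraped back out of a saved copy of the console output; the log path and the field handling are assumptions based on the format shown, not part of the SPDK harness:

    #!/usr/bin/env bash
    # Hypothetical saved copy of the console output above.
    log=/tmp/nvmf_host_multipath.console.log

    # Rolling samples look like "6256.14 IOPS, 24.44 MiB/s [...]".
    grep -oE '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' "$log"

    # The per-device summary row reads "Nvme0n1 : <runtime> <IOPS> <MiB/s> <Fail/s> <TO/s> <avg> <min> <max>".
    # Locate the ":" field so any leading timestamp columns do not shift the positions.
    awk '/Nvme0n1[[:space:]]*:/ {
        for (i = 1; i <= NF; i++)
            if ($i == ":") {
                printf "runtime=%ss iops=%s avg_lat_us=%s\n", $(i+1), $(i+2), $(i+6)
                break
            }
    }' "$log"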
17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 95413 ']'
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 95413
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95413 ']'
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95413
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95413
00:22:15.244 killing process with pid 95413
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95413'
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95413
00:22:15.244 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95413
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:22:15.502 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:22:15.760
00:22:15.760 real 1m3.302s
00:22:15.760 user 2m59.829s
00:22:15.760 sys 0m14.094s
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:22:15.760 ************************************
00:22:15.760 END TEST nvmf_host_multipath
00:22:15.760 ************************************
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:15.760 ************************************
00:22:15.760 START TEST nvmf_timeout
00:22:15.760 ************************************
00:22:15.760 17:21:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:22:15.760 * Looking for test storage...
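The nvmf_veth_fini trace above tears the virtual topology down in a fixed order: detach the four bridge ports, bring them down, delete the bridge, delete the initiator-side interfaces, then delete the target-side interfaces inside the nvmf_tgt_ns_spdk namespace. A condensed standalone sketch of that sequence follows; the interface and namespace names are the ones visible in the trace, while the "|| true" guards and the final netns delete are assumptions (remove_spdk_ns is traced above but its body is not shown):

    #!/usr/bin/env bash
    # Condensed reconstruction of the nvmf_veth_fini teardown traced above.
    netns=nvmf_tgt_ns_spdk

    for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$ifc" nomaster || true   # detach the port from the bridge
        ip link set "$ifc" down || true       # then bring it down
    done
    ip link delete nvmf_br type bridge || true   # remove the bridge itself
    ip link delete nvmf_init_if || true          # initiator-side veths
    ip link delete nvmf_init_if2 || true
    ip netns exec "$netns" ip link delete nvmf_tgt_if || true   # target-side veths live in the netns
    ip netns exec "$netns" ip link delete nvmf_tgt_if2 || true
    ip netns delete "$netns" || true             # presumably what remove_spdk_ns does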
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
17:21:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]]
17:21:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lcov --version
17:21:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:16.019 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
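The cmp_versions trace above shows how the harness decides "lt 1.15 2": both version strings are split on ".", "-" and ":" into arrays, and components are compared pairwise until one differs, with missing components treated as 0. A condensed sketch of that logic, hardcoded to the "<" case traced here (the real helper in scripts/common.sh also handles the other comparison operators):

    # Succeeds when $1 is a strictly lower version than $2,
    # mirroring the cmp_versions trace above for op '<'.
    lt() {
        local IFS=.-: i a b
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
            a=${ver1[i]:-0} b=${ver2[i]:-0}
            ((a > b)) && return 1   # first differing component decides
            ((a < b)) && return 0
        done
        return 1                    # equal is not less-than
    }

    lt 1.15 2 && echo yes   # matches the trace: ver1[0]=1 < ver2[0]=2, so lt succeeds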
ver1_l : ver2_l) )) 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.020 --rc genhtml_branch_coverage=1 00:22:16.020 --rc genhtml_function_coverage=1 00:22:16.020 --rc genhtml_legend=1 00:22:16.020 --rc geninfo_all_blocks=1 00:22:16.020 --rc geninfo_unexecuted_blocks=1 00:22:16.020 00:22:16.020 ' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.020 --rc genhtml_branch_coverage=1 00:22:16.020 --rc genhtml_function_coverage=1 00:22:16.020 --rc genhtml_legend=1 00:22:16.020 --rc geninfo_all_blocks=1 00:22:16.020 --rc geninfo_unexecuted_blocks=1 00:22:16.020 00:22:16.020 ' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.020 --rc genhtml_branch_coverage=1 00:22:16.020 --rc genhtml_function_coverage=1 00:22:16.020 --rc genhtml_legend=1 00:22:16.020 --rc geninfo_all_blocks=1 00:22:16.020 --rc geninfo_unexecuted_blocks=1 00:22:16.020 00:22:16.020 ' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.020 --rc genhtml_branch_coverage=1 00:22:16.020 --rc genhtml_function_coverage=1 00:22:16.020 --rc genhtml_legend=1 00:22:16.020 --rc geninfo_all_blocks=1 00:22:16.020 --rc geninfo_unexecuted_blocks=1 00:22:16.020 00:22:16.020 ' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.020 
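(For readers following the xtrace above: scripts/common.sh@333-368 is the harness checking whether the installed lcov predates 2.x before picking coverage flags. A minimal sketch of that comparison, condensed from the trace — the real scripts/common.sh also validates each component with its decimal helper, which is omitted here, so treat this as an illustration rather than the exact source:

  # Sketch of the traced "lt 1.15 2" check: split both version strings on
  # ".-:" and compare component-wise; missing components count as 0.
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1  # left side newer -> not "<"
          ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0  # left side older -> "<" holds
      done
      return 1                                           # equal -> not strictly less
  }
  lt 1.15 2 && echo 'lcov 1.15 predates 2.x'             # the branch taken in the log above
)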
17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.020 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.020 17:21:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:16.020 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:16.021 Cannot find device "nvmf_init_br" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:16.021 Cannot find device "nvmf_init_br2" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:16.021 Cannot find device "nvmf_tgt_br" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.021 Cannot find device "nvmf_tgt_br2" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:16.021 Cannot find device "nvmf_init_br" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:16.021 Cannot find device "nvmf_init_br2" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:16.021 Cannot find device "nvmf_tgt_br" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:16.021 Cannot find device "nvmf_tgt_br2" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:16.021 Cannot find device "nvmf_br" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:16.021 Cannot find device "nvmf_init_if" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:16.021 Cannot find device "nvmf_init_if2" 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.021 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.021 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
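(For reference, the nvmf_veth_init sequence above builds this test topology: initiator veth pairs with 10.0.0.1/.2 in the root namespace, target veth pairs whose far ends carry 10.0.0.3/.4 inside the nvmf_tgt_ns_spdk namespace, all root-namespace ends joined by the nvmf_br bridge, plus iptables ACCEPT rules for the NVMe/TCP port. A condensed replay showing one pair per side — the log repeats the same steps for the *_if2/*_br2 twins; every name and address below is taken from the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays in root ns
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the root-ns ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                          # root ns -> target ns, as checked next
)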
00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:16.280 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:16.280 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:22:16.280 00:22:16.280 --- 10.0.0.3 ping statistics --- 00:22:16.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.280 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:16.280 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:16.280 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:22:16.280 00:22:16.280 --- 10.0.0.4 ping statistics --- 00:22:16.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.280 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:16.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:22:16.280 00:22:16.280 --- 10.0.0.1 ping statistics --- 00:22:16.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.280 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:16.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:22:16.280 00:22:16.280 --- 10.0.0.2 ping statistics --- 00:22:16.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.280 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:16.280 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96830 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96830 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:16.281 17:21:58 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96830 ']' 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:16.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:16.281 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:16.540 [2024-12-06 17:21:58.578357] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:22:16.540 [2024-12-06 17:21:58.578455] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.540 [2024-12-06 17:21:58.728159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:16.540 [2024-12-06 17:21:58.765281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.540 [2024-12-06 17:21:58.765354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.540 [2024-12-06 17:21:58.765368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.540 [2024-12-06 17:21:58.765395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.540 [2024-12-06 17:21:58.765407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.540 [2024-12-06 17:21:58.766331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.540 [2024-12-06 17:21:58.766342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:16.798 17:21:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:17.056 [2024-12-06 17:21:59.152741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.056 17:21:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:17.314 Malloc0 00:22:17.314 17:21:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.573 17:21:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.831 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:18.090 [2024-12-06 17:22:00.324579] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:18.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96912 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96912 /var/tmp/bdevperf.sock 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96912 ']' 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
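(Strung together, the rpc.py calls traced above at timeout.sh@25-29 are the whole target bring-up. A condensed replay, with the script path shortened to $rpc for readability — every argument is copied from the log; the comments are an editor's gloss on what each step does:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8 KiB in-capsule data
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # expose Malloc0 as namespace 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
)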
00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.090 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:18.349 [2024-12-06 17:22:00.406745] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:22:18.349 [2024-12-06 17:22:00.406852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96912 ] 00:22:18.349 [2024-12-06 17:22:00.552308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.349 [2024-12-06 17:22:00.585444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.607 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.607 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:18.607 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:18.866 17:22:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:19.124 NVMe0n1 00:22:19.124 17:22:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96942 00:22:19.124 17:22:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:19.124 17:22:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:19.382 Running I/O for 10 seconds... 
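(On the initiator side, the traced commands point bdevperf at that listener with deliberately tight recovery settings, so the listener removal just below exercises the timeout/reconnect path. A condensed replay — $rpc and $sock are shortened here for readability, all arguments come from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # bdevperf itself was launched as: bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 -f
  $rpc -s $sock bdev_nvme_set_options -r -1                       # -r -1: retry without limit, per the trace
  $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # NVMe0n1 appears; bdevperf.py -s $sock perform_tests then drives 128-deep 4 KiB verify I/O for 10 s
)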
00:22:20.317 17:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:20.317 8593.00 IOPS, 33.57 MiB/s [2024-12-06T17:22:02.615Z]
00:22:20.317 [2024-12-06 17:22:02.578710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c78120 is same with the state(6) to be set
00:22:20.318 [... the same tcp.c:1790 recv-state error repeats roughly two dozen more times for tqpair=0x1c78120 (timestamps 17:22:02.578770 through 17:22:02.578962) ...]
00:22:20.318 [2024-12-06 17:22:02.579786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:20.318 [2024-12-06 17:22:02.579839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.319 [2024-12-06 17:22:02.580528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:20.319 [2024-12-06 17:22:02.580538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:20.320 [... matching print_command/print_completion pairs follow for every outstanding I/O (WRITE lba:76736-76976, READ lba:76168-76528), each aborted with SQ DELETION (00/08) once the listener is removed; the dump breaks off mid-entry below ...]
00:22:20.320 [2024-12-06 17:22:02.581612]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.320 [2024-12-06 17:22:02.581800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.320 [2024-12-06 17:22:02.581811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581833] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.581982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.581992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.582013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.582034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.582055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.582077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.582097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.321 [2024-12-06 17:22:02.582118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:77032 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.321 [2024-12-06 17:22:02.582478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.321 [2024-12-06 17:22:02.582490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 
[2024-12-06 17:22:02.582500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:20.322 [2024-12-06 17:22:02.582649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.582699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:20.322 [2024-12-06 17:22:02.582711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:20.322 [2024-12-06 17:22:02.582720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77176 len:8 PRP1 0x0 PRP2 0x0 00:22:20.322 [2024-12-06 17:22:02.582730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.322 [2024-12-06 17:22:02.583033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:20.322 [2024-12-06 17:22:02.583126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63f30 (9): Bad file descriptor 00:22:20.322 [2024-12-06 17:22:02.583249] 
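Each notice pair above is SPDK printing a still-queued command together with the completion status it was forced to carry: (00/08) is status code type 0x0 (generic command status) with status code 0x08 (Aborted - Submission Queue Deleted), i.e. I/O that was still waiting in the qpair when the submission queue was torn down for the reset. When triaging a run like this, a rough tally of the forced completions can be pulled straight from the saved console log; a minimal sketch, assuming the log was saved as build.log (a placeholder name):

  # count completions per abort-status string
  grep -o 'ABORTED - SQ DELETION ([0-9a-f]*/[0-9a-f]*)' build.log | sort | uniq -c
  # split the aborted commands by opcode
  grep -oE '(READ|WRITE) sqid:[0-9]+' build.log | sort | uniq -c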
00:22:20.322 [2024-12-06 17:22:02.583249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:20.322 [2024-12-06 17:22:02.583286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e63f30 with addr=10.0.0.3, port=4420
00:22:20.322 [2024-12-06 17:22:02.583300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e63f30 is same with the state(6) to be set
00:22:20.322 [2024-12-06 17:22:02.583320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63f30 (9): Bad file descriptor
00:22:20.322 [2024-12-06 17:22:02.583337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:20.322 [2024-12-06 17:22:02.583347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:20.322 [2024-12-06 17:22:02.583359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:20.322 [2024-12-06 17:22:02.583370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:20.322 [2024-12-06 17:22:02.583380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:20.322 17:22:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:22:22.187 4760.00 IOPS, 18.59 MiB/s [2024-12-06T17:22:04.743Z]
3173.33 IOPS, 12.40 MiB/s [2024-12-06T17:22:04.743Z]
[2024-12-06 17:22:04.583635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:22.445 [2024-12-06 17:22:04.583721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e63f30 with addr=10.0.0.3, port=4420
00:22:22.445 [2024-12-06 17:22:04.583740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e63f30 is same with the state(6) to be set
00:22:22.445 [2024-12-06 17:22:04.583771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63f30 (9): Bad file descriptor
00:22:22.445 [2024-12-06 17:22:04.583810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:22.445 [2024-12-06 17:22:04.583824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:22.445 [2024-12-06 17:22:04.583836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:22.445 [2024-12-06 17:22:04.583848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
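errno = 111 is ECONNREFUSED: the reconnect poller is dialing 10.0.0.3:4420 while nothing is listening there, so every bdev_nvme reset attempt fails immediately and the controller drops back into the failed state until the next retry. A quick way to reproduce the probe from a shell, as a sketch using bash's /dev/tcp pseudo-device (address and port taken from the trace above):

  # attempt a plain TCP connect the way the failing reconnects do
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
    echo 'port 4420 is accepting connections'
  else
    echo 'connection refused or timed out (cf. errno = 111 above)'
  fi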
00:22:22.445 [2024-12-06 17:22:04.583861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:22.445 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
00:22:22.445 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:22.445 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:22.703 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:22:22.703 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
00:22:22.703 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:22.703 17:22:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:23.266 17:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:22:23.266 17:22:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:22:24.198 2380.00 IOPS, 9.30 MiB/s [2024-12-06T17:22:06.754Z]
1904.00 IOPS, 7.44 MiB/s [2024-12-06T17:22:06.754Z]
[2024-12-06 17:22:06.584042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:24.456 [2024-12-06 17:22:06.584107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e63f30 with addr=10.0.0.3, port=4420
00:22:24.456 [2024-12-06 17:22:06.584126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e63f30 is same with the state(6) to be set
00:22:24.456 [2024-12-06 17:22:06.584153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e63f30 (9): Bad file descriptor
00:22:24.456 [2024-12-06 17:22:06.584174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:24.456 [2024-12-06 17:22:06.584186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:24.456 [2024-12-06 17:22:06.584198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:24.456 [2024-12-06 17:22:06.584209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:24.456 [2024-12-06 17:22:06.584221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:26.325 1586.67 IOPS, 6.20 MiB/s [2024-12-06T17:22:08.623Z]
1360.00 IOPS, 5.31 MiB/s [2024-12-06T17:22:08.623Z]
[2024-12-06 17:22:08.584403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:26.325 [2024-12-06 17:22:08.584482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:26.325 [2024-12-06 17:22:08.584496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:26.325 [2024-12-06 17:22:08.584508] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:22:26.325 [2024-12-06 17:22:08.584520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
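The @57/@58 checks above assert that the controller and bdev still exist mid-outage: bdev_nvme_get_controllers still reports NVMe0 and bdev_get_bdevs still reports NVMe0n1, because the controller object survives while its reset loop keeps retrying. Reconstructed from the rpc.py/jq calls in the trace, the helpers behave roughly like this (a sketch; the real definitions live in host/timeout.sh):

  get_controller() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  }
  get_bdev() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
  }
  # mid-outage, both names must still be present
  [[ $(get_controller) == NVMe0 ]] && [[ $(get_bdev) == NVMe0n1 ]]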
00:22:27.516 1190.00 IOPS, 4.65 MiB/s
00:22:27.516 Latency(us)
00:22:27.516 [2024-12-06T17:22:09.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:27.516 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:27.516 Verification LBA range: start 0x0 length 0x4000
00:22:27.516 NVMe0n1 : 8.11 1173.60 4.58 15.78 0.00 107422.63 2487.39 7015926.69
00:22:27.516 [2024-12-06T17:22:09.814Z] ===================================================================================================================
00:22:27.516 [2024-12-06T17:22:09.814Z] Total : 1173.60 4.58 15.78 0.00 107422.63 2487.39 7015926.69
00:22:27.516 {
00:22:27.516   "results": [
00:22:27.516     {
00:22:27.516       "job": "NVMe0n1",
00:22:27.516       "core_mask": "0x4",
00:22:27.516       "workload": "verify",
00:22:27.516       "status": "finished",
00:22:27.516       "verify_range": {
00:22:27.516         "start": 0,
00:22:27.516         "length": 16384
00:22:27.516       },
00:22:27.516       "queue_depth": 128,
00:22:27.516       "io_size": 4096,
00:22:27.516       "runtime": 8.111771,
00:22:27.516       "iops": 1173.6031502861706,
00:22:27.516       "mibps": 4.584387305805354,
00:22:27.516       "io_failed": 128,
00:22:27.516       "io_timeout": 0,
00:22:27.516       "avg_latency_us": 107422.6304598221,
00:22:27.516       "min_latency_us": 2487.389090909091,
00:22:27.516       "max_latency_us": 7015926.69090909
00:22:27.516     }
00:22:27.516   ],
00:22:27.516   "core_count": 1
00:22:27.516 }
00:22:28.081 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
00:22:28.081 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:28.081 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:22:28.337 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:22:28.337 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
00:22:28.337 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:22:28.337 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96942
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96912
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96912 ']'
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96912
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:28.594 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96912
00:22:28.854 killing process with pid 96912
00:22:28.854 Received shutdown signal, test time was about 9.441681 seconds
00:22:28.854
00:22:28.854 Latency(us)
[2024-12-06T17:22:11.152Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-06T17:22:11.152Z] ===================================================================================================================
[2024-12-06T17:22:11.152Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
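bdevperf emits the run summary twice: the human-readable Latency(us) table and the same figures as a JSON document (iops, io_failed, avg_latency_us, and so on). The large average latency (~107 ms) and max (~7.0 s) reflect I/O that sat queued across the induced outage, and "io_failed": 128 is one queue depth's worth of commands eventually failed back to the application. Pulling the headline numbers out of that JSON with jq, consistent with how the harness already uses jq; a sketch assuming the document was saved as results.json (a placeholder name):

  jq -r '.results[] | "\(.job): \(.iops | floor) IOPS, \(.io_failed) failed, avg \(.avg_latency_us) us"' results.json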
00:22:28.854 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:22:28.854 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:22:28.854 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96912'
00:22:28.854 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96912
00:22:28.854 17:22:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96912
00:22:28.854 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:29.117 [2024-12-06 17:22:11.369179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:22:29.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97101
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97101 /var/tmp/bdevperf.sock
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97101 ']'
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:29.117 17:22:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:22:29.375 [2024-12-06 17:22:11.462821] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
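With the listener restored, the harness starts a fresh bdevperf (pid 97101) in -z mode, which waits for a perform_tests RPC instead of starting I/O immediately, and blocks in waitforlisten until the app's RPC socket answers, giving up after max_retries=100. One way to implement that polling, as a sketch (the loop body is an assumption about autotest_common.sh; rpc_get_methods is a standard SPDK RPC used here only as a cheap liveness probe):

  waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || return 1   # bail out if the app died
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
      sleep 0.1
    done
    return 1
  }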
00:22:29.375 [2024-12-06 17:22:11.462956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97101 ]
00:22:29.375 [2024-12-06 17:22:11.614169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:29.375 [2024-12-06 17:22:11.664156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:30.308 17:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:30.308 17:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:22:30.308 17:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:22:30.564 17:22:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:22:30.821 NVMe0n1
00:22:30.821 17:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97153
00:22:30.821 17:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
17:22:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:22:31.078 Running I/O for 10 seconds...
00:22:32.010 17:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:32.294 8116.00 IOPS, 31.70 MiB/s [2024-12-06T17:22:14.592Z]
00:22:32.294 [2024-12-06 17:22:14.553772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd06b0 is same with the state(6) to be set
00:22:32.294 [... the same tcp.c:1790 recv-state message for tqpair=0x1cd06b0 repeats several dozen more times (17:22:14.554066 through 17:22:14.554609) as the target tears the connection down ...]
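The attach at @79 arms the three retry knobs this stage exercises, and the nvmf_subsystem_remove_listener issued one second into the run is the induced outage; the abort flood below is the host library draining its qpair as the connection drops. The same attach as a standalone sketch, with the flag semantics spelled out as I read them:

  #   --reconnect-delay-sec 1      redial the target once per second
  #   --fast-io-fail-timeout-sec 2 after 2 s of outage, fail queued I/O back to the app
  #   --ctrlr-loss-timeout-sec 5   after 5 s, stop retrying and delete the controller
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1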
00:22:32.294 [2024-12-06 17:22:14.555157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:32.294 [2024-12-06 17:22:14.555196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:32.294 [... the same command/completion pair repeats for the remaining queued commands on sqid:1 (READs lba:83600-83816, WRITEs lba:83960-84008 and beyond), each completed with ABORTED - SQ DELETION (00/08) ...]
00:22:32.294 [2024-12-06 17:22:14.556099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:96 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.295 [2024-12-06 17:22:14.556298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.295 [2024-12-06 17:22:14.556307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:84096 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 
[2024-12-06 17:22:14.556533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.556553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.556979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.556991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:84312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.296 [2024-12-06 17:22:14.557123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.296 [2024-12-06 17:22:14.557134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.296 [2024-12-06 17:22:14.557143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.297 [2024-12-06 17:22:14.557327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 
17:22:14.557399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:2 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.557983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.557998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.558013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.558028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.558041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.558056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.297 [2024-12-06 17:22:14.558068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.297 [2024-12-06 17:22:14.558083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.298 [2024-12-06 17:22:14.558095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.298 [2024-12-06 17:22:14.558140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.298 [2024-12-06 17:22:14.558156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84600 len:8 PRP1 0x0 PRP2 0x0 00:22:32.298 [2024-12-06 17:22:14.558168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.298 [2024-12-06 17:22:14.558186] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.298 [2024-12-06 17:22:14.558197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.298 [2024-12-06 17:22:14.558207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84608 len:8 PRP1 0x0 PRP2 0x0 00:22:32.298 [2024-12-06 17:22:14.558220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.298 [2024-12-06 17:22:14.558585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:32.298 [2024-12-06 17:22:14.558892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:32.298 [2024-12-06 17:22:14.559200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:32.298 [2024-12-06 17:22:14.559329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b2f30 with addr=10.0.0.3, port=4420 00:22:32.298 [2024-12-06 17:22:14.559457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2f30 is same with the state(6) to be set 00:22:32.298 [2024-12-06 17:22:14.559641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:32.298 [2024-12-06 17:22:14.559812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:32.298 [2024-12-06 17:22:14.559955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:32.298 [2024-12-06 17:22:14.560115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:32.298 [2024-12-06 17:22:14.560281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:32.298 [2024-12-06 17:22:14.560394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:32.298 17:22:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:22:33.487 5224.50 IOPS, 20.41 MiB/s [2024-12-06T17:22:15.785Z] [2024-12-06 17:22:15.560587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:33.487 [2024-12-06 17:22:15.560822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b2f30 with addr=10.0.0.3, port=4420 00:22:33.487 [2024-12-06 17:22:15.560984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2f30 is same with the state(6) to be set 00:22:33.487 [2024-12-06 17:22:15.561021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:33.487 [2024-12-06 17:22:15.561042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:33.487 [2024-12-06 17:22:15.561053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:33.487 [2024-12-06 17:22:15.561064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:22:33.487 [2024-12-06 17:22:15.561076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:22:33.487 [2024-12-06 17:22:15.561088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:33.487 17:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:33.745 [2024-12-06 17:22:15.866626] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:33.745 17:22:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97153 00:22:34.570 3483.00 IOPS, 13.61 MiB/s [2024-12-06T17:22:16.868Z] [2024-12-06 17:22:16.575871] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:36.082 2612.25 IOPS, 10.20 MiB/s [2024-12-06T17:22:19.312Z] 3226.00 IOPS, 12.60 MiB/s [2024-12-06T17:22:20.246Z] 4181.67 IOPS, 16.33 MiB/s [2024-12-06T17:22:21.623Z] 4849.43 IOPS, 18.94 MiB/s [2024-12-06T17:22:22.559Z] 5366.00 IOPS, 20.96 MiB/s [2024-12-06T17:22:23.494Z] 5751.33 IOPS, 22.47 MiB/s [2024-12-06T17:22:23.494Z] 6051.40 IOPS, 23.64 MiB/s 00:22:41.196 Latency(us) 00:22:41.196 [2024-12-06T17:22:23.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.196 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:41.196 Verification LBA range: start 0x0 length 0x4000 00:22:41.196 NVMe0n1 : 10.01 6056.99 23.66 0.00 0.00 21091.99 2234.18 3019898.88 00:22:41.196 [2024-12-06T17:22:23.494Z] =================================================================================================================== 00:22:41.196 [2024-12-06T17:22:23.494Z] Total : 6056.99 23.66 0.00 0.00 21091.99 2234.18 3019898.88 00:22:41.196 { 00:22:41.196 "results": [ 00:22:41.196 { 00:22:41.196 "job": "NVMe0n1", 00:22:41.196 "core_mask": "0x4", 00:22:41.196 "workload": "verify", 00:22:41.196 "status": "finished", 00:22:41.196 "verify_range": { 00:22:41.196 "start": 0, 00:22:41.196 "length": 16384 00:22:41.196 }, 00:22:41.196 "queue_depth": 128, 00:22:41.196 "io_size": 4096, 00:22:41.196 "runtime": 10.01191, 00:22:41.196 "iops": 6056.986129519742, 00:22:41.196 "mibps": 23.66010206843649, 00:22:41.196 "io_failed": 0, 00:22:41.196 "io_timeout": 0, 00:22:41.196 "avg_latency_us": 21091.99064602691, 00:22:41.196 "min_latency_us": 2234.181818181818, 00:22:41.196 "max_latency_us": 3019898.88 00:22:41.196 } 00:22:41.196 ], 00:22:41.196 "core_count": 1 00:22:41.196 } 00:22:41.196 17:22:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97264 00:22:41.196 17:22:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:41.196 17:22:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:22:41.196 Running I/O for 10 seconds... 
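The run above is the whole nvmf_timeout recovery cycle in one place: while the TCP listener is down, each reconnect attempt fails with connect() errno 111 (ECONNREFUSED) and bdev_nvme keeps resetting the controller; once host/timeout.sh@91 re-adds the listener via rpc.py, the reset succeeds ("Resetting controller successful") and throughput recovers to the ~6057 IOPS reported in the bdevperf JSON summary. As a minimal sketch (not the actual test script), the drive sequence can be reproduced with only the rpc.py and bdevperf.py invocations visible in this log; the repo paths, the nqn.2016-06.io.spdk:cnode1 subsystem, the 10.0.0.3:4420 listener, and the /var/tmp/bdevperf.sock socket are all taken from the run above:

  #!/usr/bin/env bash
  # Sketch: toggle the NVMe/TCP listener while bdevperf runs, mirroring the
  # host/timeout.sh steps recorded in this log.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  NQN=nqn.2016-06.io.spdk:cnode1

  # Removing the listener aborts in-flight I/O (ABORTED - SQ DELETION) and
  # leaves the host controller retrying connect(), failing with errno 111.
  "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
  sleep 1

  # Restoring the listener lets the queued controller reset complete.
  "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

  # Kick off the verify workload over bdevperf's RPC socket and wait for
  # the JSON IOPS/latency summary like the one printed above.
  "$PERF" -s /var/tmp/bdevperf.sock perform_tests

This is only the listener-toggle skeleton; starting the target, attaching the NVMe bdev, and checking results happen elsewhere in the test.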
00:22:42.130 17:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:42.392 8399.00 IOPS, 32.81 MiB/s [2024-12-06T17:22:24.690Z] [2024-12-06 17:22:24.608984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cced40 is same with the state(6) to be set 00:22:42.392 [... the same recv-state error for tqpair=0x1cced40 repeated verbatim, 2024-12-06 17:22:24.609048 through 17:22:24.609232, elided ...] 00:22:42.392 [2024-12-06 17:22:24.609553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.392 [2024-12-06 17:22:24.609605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.392 [... repeated identical READ/WRITE abort pairs elided, 2024-12-06 17:22:24.609643 through 17:22:24.611659: every queued command on sqid:1 (lba 83688-84448) printed by nvme_io_qpair_print_command and completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:22:42.393 [2024-12-06 17:22:24.611679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.393 [2024-12-06 17:22:24.611697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.393 [2024-12-06 17:22:24.611715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.611965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.611983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:42.394 [2024-12-06 17:22:24.612072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612429] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.394 [2024-12-06 17:22:24.612483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:83904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:83928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.612975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.612995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.394 [2024-12-06 17:22:24.613012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.394 [2024-12-06 17:22:24.613032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:84016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:84024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:42.395 [2024-12-06 17:22:24.613624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:84088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:84112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:84120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 
17:22:24.613874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:84128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:84136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.613968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.613984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.614003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.614021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.614041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:84160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.614057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.614077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.614098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.614118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.395 [2024-12-06 17:22:24.614133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.614183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:42.395 [2024-12-06 17:22:24.614203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:42.395 [2024-12-06 17:22:24.614221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84184 len:8 PRP1 0x0 PRP2 0x0 00:22:42.395 [2024-12-06 17:22:24.614237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.395 [2024-12-06 17:22:24.614582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:42.395 [2024-12-06 17:22:24.614743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:42.395 [2024-12-06 17:22:24.614915] 
posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.395 [2024-12-06 17:22:24.614957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b2f30 with addr=10.0.0.3, port=4420 00:22:42.395 [2024-12-06 17:22:24.614981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2f30 is same with the state(6) to be set 00:22:42.395 [2024-12-06 17:22:24.615011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:42.395 [2024-12-06 17:22:24.615039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:42.395 [2024-12-06 17:22:24.615063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:42.395 [2024-12-06 17:22:24.615081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:42.395 [2024-12-06 17:22:24.615099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:42.395 [2024-12-06 17:22:24.615117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:42.395 17:22:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:22:43.331 5230.00 IOPS, 20.43 MiB/s [2024-12-06T17:22:25.629Z] [2024-12-06 17:22:25.615295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:43.331 [2024-12-06 17:22:25.615387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b2f30 with addr=10.0.0.3, port=4420 00:22:43.331 [2024-12-06 17:22:25.615412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2f30 is same with the state(6) to be set 00:22:43.331 [2024-12-06 17:22:25.615453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:43.331 [2024-12-06 17:22:25.615483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:43.331 [2024-12-06 17:22:25.615499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:43.331 [2024-12-06 17:22:25.615516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:43.331 [2024-12-06 17:22:25.615532] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
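The per-second throughput samples interleaved with these errors (5230.00 IOPS, 20.43 MiB/s in the sample above, and similar lines below) follow directly from the 4 KiB I/O size both bdevperf runs use (io_size 4096, as the results JSON further down also records). A quick check of that arithmetic, as a sketch:

    # MiB/s = IOPS * io_size / 2^20; values taken from the sample above
    awk 'BEGIN { iops = 5230.00; io_size = 4096
                 printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
    # prints 20.43 MiB/s, matching the logged sample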
00:22:43.331 [2024-12-06 17:22:25.615548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:44.524 3486.67 IOPS, 13.62 MiB/s [2024-12-06T17:22:26.822Z] [2024-12-06 17:22:26.615730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:44.524 [2024-12-06 17:22:26.615806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b2f30 with addr=10.0.0.3, port=4420 00:22:44.524 [2024-12-06 17:22:26.615823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2f30 is same with the state(6) to be set 00:22:44.524 [2024-12-06 17:22:26.615852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:44.524 [2024-12-06 17:22:26.615872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:44.524 [2024-12-06 17:22:26.615883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:44.524 [2024-12-06 17:22:26.615894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:44.524 [2024-12-06 17:22:26.615906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:22:44.524 [2024-12-06 17:22:26.615918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:45.348 2615.00 IOPS, 10.21 MiB/s [2024-12-06T17:22:27.646Z] [2024-12-06 17:22:27.619378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:45.348 [2024-12-06 17:22:27.619455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b2f30 with addr=10.0.0.3, port=4420 00:22:45.348 [2024-12-06 17:22:27.619472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b2f30 is same with the state(6) to be set 00:22:45.348 [2024-12-06 17:22:27.619735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2f30 (9): Bad file descriptor 00:22:45.348 [2024-12-06 17:22:27.620000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:22:45.348 [2024-12-06 17:22:27.620021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:22:45.348 [2024-12-06 17:22:27.620033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:45.348 [2024-12-06 17:22:27.620044] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:22:45.348 [2024-12-06 17:22:27.620055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:45.348 17:22:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:45.916 [2024-12-06 17:22:27.922985] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:45.916 17:22:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97264 00:22:46.433 2092.00 IOPS, 8.17 MiB/s [2024-12-06T17:22:28.731Z] [2024-12-06 17:22:28.648333] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:22:48.305 2796.00 IOPS, 10.92 MiB/s [2024-12-06T17:22:31.540Z] 3645.57 IOPS, 14.24 MiB/s [2024-12-06T17:22:32.476Z] 4309.25 IOPS, 16.83 MiB/s [2024-12-06T17:22:33.413Z] 4806.33 IOPS, 18.77 MiB/s [2024-12-06T17:22:33.413Z] 5215.90 IOPS, 20.37 MiB/s 00:22:51.115 Latency(us) 00:22:51.115 [2024-12-06T17:22:33.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.115 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:51.115 Verification LBA range: start 0x0 length 0x4000 00:22:51.115 NVMe0n1 : 10.01 5222.31 20.40 3500.93 0.00 14642.43 912.29 3019898.88 00:22:51.115 [2024-12-06T17:22:33.413Z] =================================================================================================================== 00:22:51.115 [2024-12-06T17:22:33.413Z] Total : 5222.31 20.40 3500.93 0.00 14642.43 0.00 3019898.88 00:22:51.115 { 00:22:51.115 "results": [ 00:22:51.115 { 00:22:51.115 "job": "NVMe0n1", 00:22:51.115 "core_mask": "0x4", 00:22:51.115 "workload": "verify", 00:22:51.115 "status": "finished", 00:22:51.115 "verify_range": { 00:22:51.115 "start": 0, 00:22:51.115 "length": 16384 00:22:51.115 }, 00:22:51.115 "queue_depth": 128, 00:22:51.115 "io_size": 4096, 00:22:51.115 "runtime": 10.005916, 00:22:51.115 "iops": 5222.31048111937, 00:22:51.115 "mibps": 20.39965031687254, 00:22:51.115 "io_failed": 35030, 00:22:51.115 "io_timeout": 0, 00:22:51.115 "avg_latency_us": 14642.430414071516, 00:22:51.115 "min_latency_us": 912.290909090909, 00:22:51.115 "max_latency_us": 3019898.88 00:22:51.115 } 00:22:51.115 ], 00:22:51.115 "core_count": 1 00:22:51.115 } 00:22:51.115 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97101 00:22:51.115 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97101 ']' 00:22:51.115 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97101 00:22:51.115 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:22:51.115 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:51.115 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97101 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:51.466 killing process with pid 97101 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97101' 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@973 -- # kill 97101 00:22:51.466 Received shutdown signal, test time was about 10.000000 seconds 00:22:51.466 00:22:51.466 Latency(us) 00:22:51.466 [2024-12-06T17:22:33.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.466 [2024-12-06T17:22:33.764Z] =================================================================================================================== 00:22:51.466 [2024-12-06T17:22:33.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97101 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97391 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97391 /var/tmp/bdevperf.sock 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97391 ']' 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.466 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:51.466 [2024-12-06 17:22:33.611194] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
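The run that just ended exercised a fail-and-recover cycle: drop the target's listener so the initiator's reconnect attempts fail, sleep while the retry logic runs, then restore the listener. A condensed sketch of that sequence, built only from the host/timeout.sh trace lines visible in this log (a reconstruction, not the script's actual source):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subsys=nqn.2016-06.io.spdk:cnode1
    # Remove the listener: queued I/O is aborted (SQ DELETION) and
    # connect() starts failing with errno 111 (connection refused)
    $rpc nvmf_subsystem_remove_listener $subsys -t tcp -a 10.0.0.3 -s 4420
    sleep 3    # host/timeout.sh@101: let the reconnect attempts fail
    # Restore the listener: the next reconnect attempt succeeds
    $rpc nvmf_subsystem_add_listener $subsys -t tcp -a 10.0.0.3 -s 4420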
00:22:51.466 [2024-12-06 17:22:33.611347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97391 ] 00:22:51.724 [2024-12-06 17:22:33.765044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.724 [2024-12-06 17:22:33.803597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.725 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.725 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:51.725 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97401 00:22:51.725 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97391 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:22:51.725 17:22:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:22:51.982 17:22:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:52.240 NVMe0n1 00:22:52.241 17:22:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97449 00:22:52.241 17:22:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.241 17:22:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:22:52.500 Running I/O for 10 seconds... 
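This second bdevperf attached the controller with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 (see the bdev_nvme_attach_controller call above). Assuming the documented bdev_nvme semantics, reconnect attempts are spaced reconnect-delay-sec apart and the controller is given up once ctrlr-loss-timeout-sec elapses without a successful reconnect; a sketch of the resulting retry schedule (illustrative only, not taken from this log):

    # Assumed semantics of the SPDK bdev_nvme timeout options
    awk 'BEGIN { loss = 5; delay = 2
                 for (t = delay; t <= loss; t += delay)
                     printf "reconnect attempt ~%d s after disconnect\n", t
                 printf "controller declared lost after %d s\n", loss }'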
00:22:53.433 17:22:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:22:53.694 17104.00 IOPS, 66.81 MiB/s [2024-12-06T17:22:35.992Z] [2024-12-06 17:22:35.803889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd2250 is same with the state(6) to be set
[... the same "recv state of tqpair=0x1cd2250 is same with the state(6) to be set" message repeated over a hundred more times between 17:22:35.803945 and 17:22:35.804977, omitted ...]
00:22:53.696 [2024-12-06 17:22:35.805158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.696 [2024-12-06 17:22:35.805189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:53.696-00:22:53.699 [2024-12-06 17:22:35.805211 - 17:22:35.807730] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [queue drain: one READ command print followed by one matching "ABORTED - SQ DELETION (00/08)" completion for each outstanding I/O on qid:1, the pairs differing only in cid and lba; roughly 60 near-identical command/completion pairs elided] 00:22:53.699 [2024-12-06 17:22:35.807741] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.699 [2024-12-06 17:22:35.807750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.699 [2024-12-06 17:22:35.807761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.699 [2024-12-06 17:22:35.807770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.699 [2024-12-06 17:22:35.807781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.699 [2024-12-06 17:22:35.807791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.699 [2024-12-06 17:22:35.807803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.699 [2024-12-06 17:22:35.807814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.699 [2024-12-06 17:22:35.807825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.700 [2024-12-06 17:22:35.807835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.700 [2024-12-06 17:22:35.807846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1117400 is same with the state(6) to be set 00:22:53.700 [2024-12-06 17:22:35.807857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:53.700 [2024-12-06 17:22:35.807866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:53.700 [2024-12-06 17:22:35.807874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2288 len:8 PRP1 0x0 PRP2 0x0 00:22:53.700 [2024-12-06 17:22:35.807883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.700 [2024-12-06 17:22:35.808204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:53.700 [2024-12-06 17:22:35.808307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10abf30 (9): Bad file descriptor 00:22:53.700 [2024-12-06 17:22:35.808434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.700 [2024-12-06 17:22:35.808457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10abf30 with addr=10.0.0.3, port=4420 00:22:53.700 [2024-12-06 17:22:35.808468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abf30 is same with the state(6) to be set 00:22:53.700 [2024-12-06 17:22:35.808486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10abf30 (9): Bad file descriptor 00:22:53.700 [2024-12-06 17:22:35.808515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:53.700 [2024-12-06 17:22:35.808525] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:53.700 [2024-12-06 17:22:35.808536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:53.700 [2024-12-06 17:22:35.808547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:53.700 [2024-12-06 17:22:35.808557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:53.700 17:22:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97449 00:22:55.565 10166.00 IOPS, 39.71 MiB/s [2024-12-06T17:22:37.863Z] 6777.33 IOPS, 26.47 MiB/s [2024-12-06T17:22:37.863Z] [2024-12-06 17:22:37.808765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.565 [2024-12-06 17:22:37.808833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10abf30 with addr=10.0.0.3, port=4420 00:22:55.565 [2024-12-06 17:22:37.808851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abf30 is same with the state(6) to be set 00:22:55.565 [2024-12-06 17:22:37.808882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10abf30 (9): Bad file descriptor 00:22:55.565 [2024-12-06 17:22:37.808937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:55.565 [2024-12-06 17:22:37.808953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:55.565 [2024-12-06 17:22:37.808989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:55.565 [2024-12-06 17:22:37.809008] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:22:55.565 [2024-12-06 17:22:37.809020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:57.431 5083.00 IOPS, 19.86 MiB/s [2024-12-06T17:22:39.988Z] 4066.40 IOPS, 15.88 MiB/s [2024-12-06T17:22:39.988Z] [2024-12-06 17:22:39.809182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:57.690 [2024-12-06 17:22:39.809245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10abf30 with addr=10.0.0.3, port=4420 00:22:57.690 [2024-12-06 17:22:39.809261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10abf30 is same with the state(6) to be set 00:22:57.690 [2024-12-06 17:22:39.809300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10abf30 (9): Bad file descriptor 00:22:57.690 [2024-12-06 17:22:39.809321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:57.690 [2024-12-06 17:22:39.809331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:57.690 [2024-12-06 17:22:39.809343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:57.690 [2024-12-06 17:22:39.809354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
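[Editor's note: the retry cadence above — connect() failing with errno = 111 (ECONNREFUSED, the target's listener at 10.0.0.3:4420 is gone) followed by a fresh "resetting controller" notice every ~2 seconds — is bdev_nvme's reconnect-delay machinery at work. A hedged sketch of how such a controller is typically attached with those knobs; the exact invocation for this run appears earlier in the log, the values below are illustrative, and the flag names assume a current SPDK scripts/rpc.py:]

```bash
#!/usr/bin/env bash
# Hedged sketch only: attach a bdev_nvme controller with explicit reconnect
# knobs. Values are illustrative, picked to match the ~2 s retry cadence and
# the ~8 s give-up seen in this trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -b            bdev name prefix (the job above runs against "NVMe0n1")
# -t/-a/-s/-n   transport, address, service id, subsystem NQN, all as logged
# -l            ctrlr-loss-timeout-sec: how long to keep retrying before
#               failing the bdev for good
# -o            reconnect-delay-sec: wait between attempts (the 2 s cadence)
"$rpc" bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    -l 8 -o 2
```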
00:22:57.690 [2024-12-06 17:22:39.809365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:59.570 3388.67 IOPS, 13.24 MiB/s [2024-12-06T17:22:41.868Z] 2904.57 IOPS, 11.35 MiB/s [2024-12-06T17:22:41.868Z] [2024-12-06 17:22:41.809509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:59.570 [2024-12-06 17:22:41.809577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:22:59.570 [2024-12-06 17:22:41.809590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:22:59.570 [2024-12-06 17:22:41.809600] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:22:59.570 [2024-12-06 17:22:41.809612] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:00.760 2541.50 IOPS, 9.93 MiB/s 00:23:00.760 Latency(us) 00:23:00.760 [2024-12-06T17:22:43.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.760 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:00.760 NVMe0n1 : 8.19 2481.60 9.69 15.62 0.00 51199.07 3485.32 7015926.69 00:23:00.760 [2024-12-06T17:22:43.058Z] =================================================================================================================== 00:23:00.760 [2024-12-06T17:22:43.058Z] Total : 2481.60 9.69 15.62 0.00 51199.07 3485.32 7015926.69 00:23:00.760 { 00:23:00.760 "results": [ 00:23:00.760 { 00:23:00.760 "job": "NVMe0n1", 00:23:00.760 "core_mask": "0x4", 00:23:00.760 "workload": "randread", 00:23:00.760 "status": "finished", 00:23:00.760 "queue_depth": 128, 00:23:00.760 "io_size": 4096, 00:23:00.760 "runtime": 8.193102, 00:23:00.760 "iops": 2481.5997652659516, 00:23:00.760 "mibps": 9.693749083070124, 00:23:00.760 "io_failed": 128, 00:23:00.760 "io_timeout": 0, 00:23:00.760 "avg_latency_us": 51199.07182084777, 00:23:00.760 "min_latency_us": 3485.3236363636365, 00:23:00.760 "max_latency_us": 7015926.69090909 00:23:00.760 } 00:23:00.760 ], 00:23:00.760 "core_count": 1 00:23:00.760 } 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.760 Attaching 5 probes... 
00:23:00.760 1356.040595: reset bdev controller NVMe0 00:23:00.760 1356.219494: reconnect bdev controller NVMe0 00:23:00.760 3356.455247: reconnect delay bdev controller NVMe0 00:23:00.760 3356.477348: reconnect bdev controller NVMe0 00:23:00.760 5356.909041: reconnect delay bdev controller NVMe0 00:23:00.760 5356.933254: reconnect bdev controller NVMe0 00:23:00.760 7357.329479: reconnect delay bdev controller NVMe0 00:23:00.760 7357.353258: reconnect bdev controller NVMe0 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97401 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97391 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97391 ']' 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97391 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97391 00:23:00.760 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:00.760 killing process with pid 97391 00:23:00.761 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:00.761 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97391' 00:23:00.761 Received shutdown signal, test time was about 8.251158 seconds 00:23:00.761 00:23:00.761 Latency(us) 00:23:00.761 [2024-12-06T17:22:43.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.761 [2024-12-06T17:22:43.059Z] =================================================================================================================== 00:23:00.761 [2024-12-06T17:22:43.059Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:00.761 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97391 00:23:00.761 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97391 00:23:00.761 17:22:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:01.019 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.019 17:22:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.019 rmmod nvme_tcp 00:23:01.019 rmmod nvme_fabrics 00:23:01.278 rmmod nvme_keyring 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96830 ']' 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96830 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96830 ']' 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96830 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96830 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.278 killing process with pid 96830 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96830' 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96830 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96830 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:01.278 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:01.537 17:22:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:01.537 00:23:01.537 real 0m45.865s 00:23:01.537 user 2m15.991s 00:23:01.537 sys 0m4.591s 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:01.537 ************************************ 00:23:01.537 END TEST nvmf_timeout 00:23:01.537 ************************************ 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:01.537 00:23:01.537 real 5m38.130s 00:23:01.537 user 14m36.618s 00:23:01.537 sys 1m0.758s 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:01.537 ************************************ 00:23:01.537 END TEST nvmf_host 00:23:01.537 17:22:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.537 ************************************ 00:23:01.537 17:22:43 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:01.537 17:22:43 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:23:01.537 17:22:43 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:01.537 17:22:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:01.537 17:22:43 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.798 17:22:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.798 ************************************ 00:23:01.798 START TEST nvmf_target_core_interrupt_mode 00:23:01.798 ************************************ 00:23:01.798 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:23:01.798 * Looking for test storage... 
00:23:01.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:23:01.798 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:01.798 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:23:01.798 17:22:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.798 --rc genhtml_branch_coverage=1 00:23:01.798 --rc genhtml_function_coverage=1 00:23:01.798 --rc genhtml_legend=1 00:23:01.798 --rc geninfo_all_blocks=1 00:23:01.798 --rc geninfo_unexecuted_blocks=1 00:23:01.798 00:23:01.798 ' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.798 --rc genhtml_branch_coverage=1 00:23:01.798 --rc genhtml_function_coverage=1 00:23:01.798 --rc genhtml_legend=1 00:23:01.798 --rc geninfo_all_blocks=1 00:23:01.798 --rc geninfo_unexecuted_blocks=1 00:23:01.798 00:23:01.798 ' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.798 --rc genhtml_branch_coverage=1 00:23:01.798 --rc genhtml_function_coverage=1 00:23:01.798 --rc genhtml_legend=1 00:23:01.798 --rc geninfo_all_blocks=1 00:23:01.798 --rc geninfo_unexecuted_blocks=1 00:23:01.798 00:23:01.798 ' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:01.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.798 --rc genhtml_branch_coverage=1 00:23:01.798 --rc genhtml_function_coverage=1 00:23:01.798 --rc genhtml_legend=1 00:23:01.798 --rc geninfo_all_blocks=1 00:23:01.798 --rc geninfo_unexecuted_blocks=1 00:23:01.798 00:23:01.798 ' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:01.798 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:01.799 ************************************ 00:23:01.799 START TEST nvmf_abort 00:23:01.799 ************************************ 00:23:01.799 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:23:02.058 * Looking for test storage... 00:23:02.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:02.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.058 --rc genhtml_branch_coverage=1 00:23:02.058 --rc genhtml_function_coverage=1 00:23:02.058 --rc genhtml_legend=1 00:23:02.058 --rc geninfo_all_blocks=1 00:23:02.058 --rc geninfo_unexecuted_blocks=1 00:23:02.058 00:23:02.058 ' 00:23:02.058 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.059 --rc genhtml_branch_coverage=1 00:23:02.059 --rc genhtml_function_coverage=1 00:23:02.059 --rc genhtml_legend=1 00:23:02.059 --rc geninfo_all_blocks=1 00:23:02.059 --rc geninfo_unexecuted_blocks=1 00:23:02.059 00:23:02.059 ' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.059 --rc genhtml_branch_coverage=1 00:23:02.059 --rc genhtml_function_coverage=1 00:23:02.059 --rc genhtml_legend=1 00:23:02.059 --rc geninfo_all_blocks=1 00:23:02.059 --rc geninfo_unexecuted_blocks=1 00:23:02.059 00:23:02.059 ' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:02.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:02.059 --rc genhtml_branch_coverage=1 00:23:02.059 --rc genhtml_function_coverage=1 00:23:02.059 --rc genhtml_legend=1 00:23:02.059 --rc geninfo_all_blocks=1 00:23:02.059 --rc geninfo_unexecuted_blocks=1 00:23:02.059 00:23:02.059 ' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:02.059 17:22:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:02.059 Cannot find device "nvmf_init_br" 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:23:02.059 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:02.060 Cannot find device "nvmf_init_br2" 00:23:02.060 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:23:02.060 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:02.060 Cannot find device "nvmf_tgt_br" 00:23:02.060 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:23:02.060 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:02.060 Cannot find device "nvmf_tgt_br2" 00:23:02.060 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:23:02.060 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:02.318 Cannot find device "nvmf_init_br" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:02.318 Cannot find device "nvmf_init_br2" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:02.318 Cannot find device "nvmf_tgt_br" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:02.318 Cannot find device "nvmf_tgt_br2" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:02.318 Cannot find device "nvmf_br" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:02.318 Cannot find device "nvmf_init_if" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:02.318 Cannot find device "nvmf_init_if2" 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:02.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:02.318 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:02.318 
17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:02.318 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:02.577 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:02.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:02.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:23:02.578 00:23:02.578 --- 10.0.0.3 ping statistics --- 00:23:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.578 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:02.578 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:02.578 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:23:02.578 00:23:02.578 --- 10.0.0.4 ping statistics --- 00:23:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.578 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:02.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:02.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:02.578 00:23:02.578 --- 10.0.0.1 ping statistics --- 00:23:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.578 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:02.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:02.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:23:02.578 00:23:02.578 --- 10.0.0.2 ping statistics --- 00:23:02.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:02.578 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=97859 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 97859 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 97859 ']' 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.578 17:22:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.578 [2024-12-06 17:22:44.741801] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:02.578 [2024-12-06 17:22:44.743045] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:23:02.578 [2024-12-06 17:22:44.743105] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.837 [2024-12-06 17:22:44.892557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:02.837 [2024-12-06 17:22:44.925113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.837 [2024-12-06 17:22:44.925163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.837 [2024-12-06 17:22:44.925173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.837 [2024-12-06 17:22:44.925181] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.837 [2024-12-06 17:22:44.925188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.837 [2024-12-06 17:22:44.925868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.837 [2024-12-06 17:22:44.925957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.837 [2024-12-06 17:22:44.925954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.837 [2024-12-06 17:22:44.974295] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:02.837 [2024-12-06 17:22:44.974303] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:02.837 [2024-12-06 17:22:44.974898] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:23:02.837 [2024-12-06 17:22:44.977565] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
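Worth noting at this point: nvmf_tgt is launched inside the test netns with --interrupt-mode and core mask 0xE, so the three reactors (cores 1-3) sleep on file descriptors instead of busy-polling, and each poll-group thread is switched over as the "Set spdk_thread ... to intr mode" notices above confirm. A hedged sketch of the launch-and-wait pattern the harness uses (the real waitforlisten helper in common/autotest_common.sh is more elaborate; rpc_get_methods serves here only as a liveness probe):

# Start the target in interrupt mode inside the namespace built earlier,
# then block until its RPC socket (/var/tmp/spdk.sock by default) answers.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done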
00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 [2024-12-06 17:22:45.054904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 Malloc0 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 Delay0 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 17:22:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:02.837 [2024-12-06 17:22:45.122872] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.837 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:03.096 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.096 17:22:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:23:03.096 [2024-12-06 17:22:45.304164] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:05.623 Initializing NVMe Controllers 00:23:05.623 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:23:05.623 controller IO queue size 128 less than required 00:23:05.623 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:23:05.623 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:23:05.623 Initialization complete. Launching workers. 
00:23:05.624 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28307 00:23:05.624 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28364, failed to submit 66 00:23:05.624 success 28307, unsuccessful 57, failed 0 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:05.624 rmmod nvme_tcp 00:23:05.624 rmmod nvme_fabrics 00:23:05.624 rmmod nvme_keyring 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 97859 ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 97859 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 97859 ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 97859 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97859 00:23:05.624 killing process with pid 97859 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97859' 00:23:05.624 
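That summary closes the abort pass: 28364 abort commands were submitted (66 could not be submitted), 28307 succeeded, 57 came back "unsuccessful" (presumably I/O that completed before the abort could land), and none failed outright. Condensed from the rpc_cmd and abort.sh traces above, the whole exercise reduces to roughly this sketch (rpc_cmd is the suite's wrapper around scripts/rpc.py; the unit notes in the comments are inferred from the bdev_delay and bdev_malloc flag conventions, not from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# A deliberately slow namespace: Delay0 layers ~1 s (1000000 us) of latency
# on every I/O over the 64 MiB / 4 KiB-block Malloc0, so plenty of commands
# are still queued when the aborts arrive.
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

# One second (-t 1) of queue-depth-128 I/O from core 0 (-c 0x1), aborting
# whatever is still outstanding; -l warning trims the log level.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128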
17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 97859 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 97859 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.624 17:22:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:23:05.624 00:23:05.624 real 0m3.774s 00:23:05.624 user 0m8.816s 00:23:05.624 sys 0m1.416s 00:23:05.624 ************************************ 00:23:05.624 END TEST nvmf_abort 00:23:05.624 ************************************ 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:23:05.624 ************************************ 00:23:05.624 START TEST nvmf_ns_hotplug_stress 00:23:05.624 ************************************ 00:23:05.624 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:23:05.884 * Looking for test storage... 00:23:05.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:05.884 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:05.884 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:23:05.884 17:22:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:23:05.884 17:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:23:05.884 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:05.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.885 --rc genhtml_branch_coverage=1 00:23:05.885 --rc genhtml_function_coverage=1 00:23:05.885 --rc genhtml_legend=1 00:23:05.885 --rc geninfo_all_blocks=1 00:23:05.885 --rc geninfo_unexecuted_blocks=1 00:23:05.885 00:23:05.885 ' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:05.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.885 --rc genhtml_branch_coverage=1 00:23:05.885 --rc genhtml_function_coverage=1 00:23:05.885 --rc genhtml_legend=1 00:23:05.885 --rc geninfo_all_blocks=1 00:23:05.885 --rc geninfo_unexecuted_blocks=1 00:23:05.885 00:23:05.885 
' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:05.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.885 --rc genhtml_branch_coverage=1 00:23:05.885 --rc genhtml_function_coverage=1 00:23:05.885 --rc genhtml_legend=1 00:23:05.885 --rc geninfo_all_blocks=1 00:23:05.885 --rc geninfo_unexecuted_blocks=1 00:23:05.885 00:23:05.885 ' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:05.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:05.885 --rc genhtml_branch_coverage=1 00:23:05.885 --rc genhtml_function_coverage=1 00:23:05.885 --rc genhtml_legend=1 00:23:05.885 --rc geninfo_all_blocks=1 00:23:05.885 --rc geninfo_unexecuted_blocks=1 00:23:05.885 00:23:05.885 ' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.885 17:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.885 17:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:05.885 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:05.886 Cannot find device "nvmf_init_br" 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:23:05.886 Cannot find device "nvmf_init_br2" 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:23:05.886 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:06.144 Cannot find device "nvmf_tgt_br" 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:06.144 Cannot find device "nvmf_tgt_br2" 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:06.144 Cannot find device "nvmf_init_br" 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:06.144 Cannot find device "nvmf_init_br2" 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:23:06.144 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:06.144 Cannot find device "nvmf_tgt_br" 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:06.145 Cannot find device "nvmf_tgt_br2" 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:06.145 Cannot find device "nvmf_br" 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:06.145 Cannot find device "nvmf_init_if" 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:06.145 Cannot find device "nvmf_init_if2" 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:06.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:06.145 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:06.145 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:06.145 17:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:06.403 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:06.403 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:23:06.403 00:23:06.403 --- 10.0.0.3 ping statistics --- 00:23:06.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.403 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:06.403 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:06.403 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:23:06.403 00:23:06.403 --- 10.0.0.4 ping statistics --- 00:23:06.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.403 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:06.403 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.403 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:23:06.403 00:23:06.403 --- 10.0.0.1 ping statistics --- 00:23:06.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.403 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:06.403 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.403 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:23:06.403 00:23:06.403 --- 10.0.0.2 ping statistics --- 00:23:06.403 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.403 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:06.403 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:06.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
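The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmf_veth_fini tears the old topology down unconditionally before nvmf_veth_init rebuilds it, and this run had nothing left to delete. Reduced to one initiator/target pair, the fixture that the common.sh@145-@225 traces assemble is sketched below (same commands as traced; the second veth pair, the 10.0.0.2/10.0.0.4 addresses and the iptables comment tags are elided):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the two halves
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                           # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator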
00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=98139 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 98139 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 98139 ']' 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.404 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:06.404 [2024-12-06 17:22:48.617358] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:06.404 [2024-12-06 17:22:48.618798] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:23:06.404 [2024-12-06 17:22:48.619032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.662 [2024-12-06 17:22:48.772456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:06.662 [2024-12-06 17:22:48.810823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.662 [2024-12-06 17:22:48.811036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.662 [2024-12-06 17:22:48.811227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.662 [2024-12-06 17:22:48.811355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.662 [2024-12-06 17:22:48.811370] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.662 [2024-12-06 17:22:48.812149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.662 [2024-12-06 17:22:48.812781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.662 [2024-12-06 17:22:48.812842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.662 [2024-12-06 17:22:48.870979] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:06.662 [2024-12-06 17:22:48.871228] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:06.662 [2024-12-06 17:22:48.871566] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
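The records that follow provision cnode1 (subsystem, listener on 10.0.0.3:4420, Malloc0/Delay0, plus a resizable null bdev) and then run the loop under test: spdk_nvme_perf reads from the namespace for 30 seconds while the host side keeps removing, re-adding and resizing it. A condensed sketch of that loop, pieced together from the ns_hotplug_stress.sh@44-@50 traces below; the loop body is an approximation rather than the script verbatim, and the -Q comment is inferred from the "Message suppressed 999 times" lines:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_null_create NULL1 1000 512                 # 1000 MiB null bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# 30 s of qd-128 random 512 B reads from core 0; -Q 1000 keeps perf running
# through the I/O errors that hot-removal provokes, logging only every 1000th.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 $PERF_PID 2>/dev/null; do              # hotplug until perf exits
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 $null_size           # 1001, 1002, ... as traced
done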
00:23:06.662 [2024-12-06 17:22:48.872488] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:23:06.662 17:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:07.231 [2024-12-06 17:22:49.270419] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.231 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:07.489 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:07.748 [2024-12-06 17:22:49.870574] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:07.748 17:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:23:08.007 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:23:08.265 Malloc0 00:23:08.265 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:08.526 Delay0 00:23:08.784 17:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:09.043 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:23:09.301 NULL1 00:23:09.302 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:23:09.560 
17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:23:09.560 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=98258 00:23:09.560 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:09.560 17:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:10.937 Read completed with error (sct=0, sc=11) 00:23:10.937 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:10.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:10.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:10.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:10.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:10.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:10.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:11.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:11.196 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:11.196 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:23:11.196 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:23:11.456 true 00:23:11.456 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:11.456 17:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:12.390 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:12.390 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:23:12.390 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:23:12.647 true 00:23:12.913 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:12.913 17:22:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:13.178 17:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:13.436 17:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:23:13.436 17:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:23:13.694 true 00:23:13.694 17:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:13.694 17:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:13.951 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:14.209 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:23:14.209 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:23:14.466 true 00:23:14.466 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:14.466 17:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:15.033 17:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:15.291 17:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:23:15.291 17:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:23:15.549 true 00:23:15.549 17:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:15.549 17:22:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:16.116 17:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:16.684 17:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:23:16.684 17:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:23:16.684 true 00:23:16.942 17:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:16.942 17:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:17.200 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:17.459 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:23:17.459 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:23:17.717 true 00:23:17.717 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:17.717 17:22:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:17.975 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:18.233 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:23:18.233 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:23:18.491 true 00:23:18.491 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:18.491 17:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:19.058 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:19.317 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:23:19.317 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:23:19.575 true 00:23:19.575 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:19.575 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:19.833 17:23:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:20.091 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:23:20.091 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:23:20.349 true 00:23:20.349 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:20.349 17:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:21.281 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:21.538 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:23:21.538 17:23:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:23:21.796 true 00:23:21.796 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:21.796 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:22.053 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:22.311 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:23:22.311 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:23:22.877 true 00:23:22.877 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:22.877 17:23:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:23.135 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:23.391 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:23:23.391 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:23:23.649 true 00:23:23.649 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:23.649 17:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:23.906 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:24.163 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:23:24.163 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:23:24.420 true 00:23:24.420 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:24.420 17:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:25.353 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:25.611 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:23:25.611 17:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:23:25.869 true 00:23:25.869 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:25.869 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:26.128 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:26.386 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:23:26.386 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:23:26.645 true 00:23:26.645 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:26.645 17:23:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:26.903 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:27.161 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:23:27.161 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:23:27.728 true 00:23:27.728 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:27.728 17:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:27.728 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:28.297 
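Every iteration traced through here has the same shape. Reconstructed approximately from the xtrace of ns_hotplug_stress.sh lines 40-50 (the backgrounding with $! and the 2>/dev/null are assumptions, not shown in the trace):

    # Fixed 30 s randread load against the target while namespaces come and go
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!    # 98258 in this run

    # While perf is still alive: hot-remove namespace 1, re-add Delay0,
    # then grow NULL1 by one (1001, 1002, ... as the @49/@50 lines show)
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"
    done

Judging by the "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" runs in the trace, the hot-removes provoke a steady stream of read errors that perf tolerates and rate-limits rather than aborting on.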
17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:23:28.297 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:23:28.297 true 00:23:28.297 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:28.297 17:23:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:29.264 17:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:29.523 17:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:23:29.523 17:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:23:29.781 true 00:23:29.781 17:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:29.781 17:23:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:30.039 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:30.297 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:23:30.297 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:23:30.862 true 00:23:30.862 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:30.862 17:23:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:31.120 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:31.378 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:23:31.378 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:23:31.636 true 00:23:31.636 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:31.636 17:23:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:31.894 17:23:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:32.154 17:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:23:32.154 17:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:23:32.412 true 00:23:32.412 17:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:32.412 17:23:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:33.345 17:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:33.603 17:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:23:33.603 17:23:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:23:33.861 true 00:23:33.861 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:33.861 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:34.118 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:34.375 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:23:34.375 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:23:34.633 true 00:23:34.893 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:34.893 17:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.153 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:35.412 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:23:35.412 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:23:35.670 true 00:23:35.670 17:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:35.670 17:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:35.929 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:36.186 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:23:36.186 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:23:36.445 true 00:23:36.445 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:36.445 17:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:37.382 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:37.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:37.640 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:23:37.640 17:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:23:37.898 true 00:23:37.898 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258 00:23:37.898 17:23:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:39.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.298 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:39.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.298 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:23:39.815 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:23:39.815 17:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:23:40.074 true 00:23:40.074 17:23:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258
00:23:40.074 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:40.642 Initializing NVMe Controllers
00:23:40.642 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:23:40.642 Controller IO queue size 128, less than required.
00:23:40.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:40.642 Controller IO queue size 128, less than required.
00:23:40.642 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:40.642 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:40.642 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:23:40.642 Initialization complete. Launching workers.
00:23:40.642 ========================================================
00:23:40.642                                                  Latency(us)
00:23:40.642 Device Information                                             :    IOPS   MiB/s   Average      min        max
00:23:40.642 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  671.93    0.33  71434.46  3075.77 1027907.68
00:23:40.642 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7070.58    3.45  18102.97  3709.96  685406.79
00:23:40.642 ========================================================
00:23:40.642 Total                                                          : 7742.51    3.78  22731.32  3075.77 1027907.68
00:23:40.642
00:23:40.642 17:23:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:23:40.900 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:23:40.900 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:23:41.159 true
00:23:41.417 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 98258
00:23:41.417 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (98258) - No such process
00:23:41.417 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 98258
00:23:41.417 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:23:41.675 17:23:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:23:41.933 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:23:41.933 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:23:41.933 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:23:41.933 17:23:24
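The summary above is the 30 s perf run ending: NSID 1 (Delay0, with its 1000000 us delays) averaged about 71.4 ms per 512-byte read at roughly 672 IOPS, while NSID 2 (NULL1) averaged about 18.1 ms at roughly 7071 IOPS. Once kill -0 fails and wait reaps perf, the test moves to its parallel phase: eight null bdevs, each cycled through add/remove by its own background worker. Reconstructed approximately from the xtrace of ns_hotplug_stress.sh lines 14-18 and 58-66 (an approximation of the loop structure, not the script verbatim):

    # One worker per namespace ID: add the bdev as ns <nsid>, remove it, ten times over
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"    # this run's workers: 99283 99284 99287 99289 99291 99292 99294 99296

The interleaved @16/@17/@18 xtrace lines that follow are those eight workers racing each other against the same subsystem, which is the point of the stress test.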
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:41.933 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:23:42.192 null0 00:23:42.192 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:42.192 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:42.192 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:23:42.450 null1 00:23:42.451 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:42.451 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:42.451 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:23:42.709 null2 00:23:42.709 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:42.709 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:42.709 17:23:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:23:42.966 null3 00:23:42.966 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:42.966 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:42.966 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:23:43.224 null4 00:23:43.224 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.224 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.224 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:23:43.790 null5 00:23:43.790 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.790 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.790 17:23:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:23:43.790 null6 00:23:43.790 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:43.790 17:23:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:43.790 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:23:44.376 null7 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.376 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 99283 99284 99287 99289 99291 99292 99294 99296 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.377 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:44.637 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:44.894 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:44.894 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:45.152 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:45.152 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:45.152 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:45.152 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:45.152 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:45.152 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:45.411 17:23:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.411 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:45.668 17:23:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:45.668 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:45.926 17:23:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:45.926 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.184 
17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:46.184 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.443 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:46.702 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:46.702 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.702 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.702 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.703 17:23:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:46.961 17:23:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:46.961 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:47.220 
17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.220 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:47.479 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:47.738 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:47.738 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:47.738 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:47.738 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.738 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.738 17:23:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:47.738 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.738 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.738 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:47.996 17:23:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:47.996 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:48.256 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:48.514 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.772 17:23:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:48.772 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:48.772 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:48.772 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:49.030 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:49.287 17:23:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:49.287 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.595 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:23:49.852 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:49.852 17:23:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:23:49.852 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.110 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:23:50.368 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:23:50.626 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.883 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:23:50.883 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.883 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:23:50.883 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.883 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:23:50.883 17:23:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:50.883 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
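
The nvmf/common.sh@124-@128 lines above are nvmfcleanup unloading the kernel NVMe modules once the stress run finishes. A sketch of the retry pattern implied by that trace, in bash; the &&/break structure and the back-off are inferred from the @-line numbers, not copied from nvmf/common.sh:

  # Sketch of the unload-retry pattern implied by nvmf/common.sh@124-@129 above.
  # 'modprobe -r' can fail while connections are still draining, so errexit is
  # suspended and the removal is retried a bounded number of times.
  set +e                                       # @124: tolerate transient failures
  for i in {1..20}; do                         # @125: bounded retry
      modprobe -v -r nvme-tcp &&               # @126: drop the transport module first
          modprobe -v -r nvme-fabrics && break # @127: then the fabrics core
      sleep 1                                  # hypothetical back-off; not in the trace
  done
  set -e                                       # @128: restore errexit
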
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 98139 ']'
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 98139
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 98139 ']'
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 98139
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98139
00:23:50.883 killing process with pid 98139
17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98139'
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 98139
00:23:50.883 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 98139
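
The common/autotest_common.sh@954-@978 lines above trace killprocess tearing down the target app (pid 98139, process name reactor_1). A minimal reconstruction of that sequence, assuming the helper shape from the xtrace alone; the real helper's handling of the sudo case is not shown in this log:

  # Sketch of the kill/wait sequence seen in the trace above (reconstructed,
  # not copied verbatim from common/autotest_common.sh).
  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                # @954: refuse an empty pid
      kill -0 "$pid" 2>/dev/null || return 1   # @958: is the process still alive?
      if [ "$(uname)" = Linux ]; then          # @959
          process_name=$(ps --no-headers -o comm= "$pid")   # @960
      fi
      [ "$process_name" = sudo ] && return 1   # @964: trace only shows the comparison
      echo "killing process with pid $pid"     # @972
      kill "$pid"                              # @973
      wait "$pid"                              # @978: reap it and propagate status
  }
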
00:23:51.140 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:51.140 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:51.140 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:51.140 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:23:51.140 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:23:51.140 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:23:51.141 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:23:51.399
00:23:51.399 real 0m45.637s
00:23:51.399 user 3m28.186s
00:23:51.399 sys 0m18.834s
17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:23:51.399 ************************************
00:23:51.399 END TEST nvmf_ns_hotplug_stress
00:23:51.399 ************************************
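
That banner closes nvmf_ns_hotplug_stress: ten passes of hot-adding and hot-removing namespaces 1-8 on nqn.2016-06.io.spdk:cnode1, each backed by a null bdev (null0-null7). A sketch of the loop shape implied by the @16-@18 trace lines that dominate this part of the log; the variable names and the backgrounding of the RPCs are assumptions (the out-of-order interleaving in the trace suggests they run concurrently), not the verbatim target/ns_hotplug_stress.sh:

  # Shape of the stress loop implied by ns_hotplug_stress.sh@16-@18 above.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do                  # @16: ten passes
      for n in {1..8}; do
          # @17: hot-add namespace n, backed by null bdev null$((n-1))
          "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" &
      done
      for n in {1..8}; do
          "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n" &   # @18: hot-remove it
      done
      wait    # assumed: let one pass drain before the next
  done
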
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:23:51.399 ************************************
00:23:51.399 START TEST nvmf_delete_subsystem
00:23:51.399 ************************************
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:23:51.399 * Looking for test storage...
* Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:23:51.399 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:51.660 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
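
The scripts/common.sh trace above (@333-@368) is the version comparison behind lt 1.15 2: both strings are split on ".", "-" and ":", components are compared numerically left to right, and the return 0 here means 1.15 < 2 holds. A condensed bash reconstruction of that logic as traced; the decimal helper that validates each component is folded away, and the full cmp_versions also handles ">", "=" and unequal lengths:

  # Condensed reconstruction of scripts/common.sh cmp_versions as traced above.
  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l op=$2 lt=0 gt=0 v
      IFS=.-: read -ra ver1 <<< "$1"          # @336: "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"          # @337: "2"    -> (2)
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}   # @340/@341: component counts
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do  # @364
          (( ver1[v] > ver2[v] )) && { gt=1; break; }   # @367
          (( ver1[v] < ver2[v] )) && { lt=1; break; }   # @368: 1 < 2, stop here
      done
      [[ $op == '<' ]] && (( lt == 1 ))       # sketch: only the '<' case
  }
  lt() { cmp_versions "$1" '<' "$2"; }        # @373: lt 1.15 2 -> true
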
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.661 --rc genhtml_branch_coverage=1
00:23:51.661 --rc genhtml_function_coverage=1
00:23:51.661 --rc genhtml_legend=1
00:23:51.661 --rc geninfo_all_blocks=1
00:23:51.661 --rc geninfo_unexecuted_blocks=1
00:23:51.661
00:23:51.661 '
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.661 --rc genhtml_branch_coverage=1
00:23:51.661 --rc genhtml_function_coverage=1
00:23:51.661 --rc genhtml_legend=1
00:23:51.661 --rc geninfo_all_blocks=1
00:23:51.661 --rc geninfo_unexecuted_blocks=1
00:23:51.661
00:23:51.661 '
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:23:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.661 --rc genhtml_branch_coverage=1
00:23:51.661 --rc genhtml_function_coverage=1
00:23:51.661 --rc genhtml_legend=1
00:23:51.661 --rc geninfo_all_blocks=1
00:23:51.661 --rc geninfo_unexecuted_blocks=1
00:23:51.661
00:23:51.661 '
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:23:51.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:51.661 --rc genhtml_branch_coverage=1
00:23:51.661 --rc genhtml_function_coverage=1
00:23:51.661 --rc genhtml_legend=1
00:23:51.661 --rc geninfo_all_blocks=1
00:23:51.661 --rc geninfo_unexecuted_blocks=1
00:23:51.661
00:23:51.661 '
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.661 17:23:33
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.661 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.662 17:23:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:51.662 Cannot find device "nvmf_init_br" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:51.662 Cannot find device "nvmf_init_br2" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:51.662 Cannot find device "nvmf_tgt_br" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.662 Cannot find device "nvmf_tgt_br2" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:51.662 Cannot find device "nvmf_init_br" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:51.662 Cannot find device "nvmf_init_br2" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:51.662 Cannot find device "nvmf_tgt_br" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:51.662 Cannot find device "nvmf_tgt_br2" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:51.662 Cannot find device "nvmf_br" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:51.662 Cannot find device "nvmf_init_if" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:51.662 Cannot find device "nvmf_init_if2" 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:23:51.662 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:51.922 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:51.922 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:51.922 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:51.922 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:51.922 17:23:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:51.922 17:23:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:51.922 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:51.922 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.118 ms 00:23:51.922 00:23:51.922 --- 10.0.0.3 ping statistics --- 00:23:51.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.922 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:51.922 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:51.922 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.087 ms 00:23:51.922 00:23:51.922 --- 10.0.0.4 ping statistics --- 00:23:51.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.922 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:51.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:51.922 00:23:51.922 --- 10.0.0.1 ping statistics --- 00:23:51.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.922 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:51.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:23:51.922 00:23:51.922 --- 10.0.0.2 ping statistics --- 00:23:51.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.922 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.922 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.923 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=100673 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 100673 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 100673 ']' 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
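Condensed from the nvmftestinit and nvmfappstart traces above: the harness builds a veth-plus-bridge topology around a private network namespace, then launches the target inside that namespace and waits on its RPC socket. A minimal sketch of the same sequence (the second initiator/target pair, error paths, and teardown are omitted; the readiness loop at the end is illustrative, since the real waitforlisten probes the RPC server rather than just the socket file):

#!/usr/bin/env bash
set -euo pipefail

# Namespace plus one veth pair per side, bridged together (per nvmf_veth_init):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Target on cores 0-1 (-m 0x3), interrupt mode on, as in the trace above:
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Illustrative readiness wait on the RPC UNIX domain socket:
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done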
00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.182 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.182 [2024-12-06 17:23:34.289502] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:23:52.182 [2024-12-06 17:23:34.290776] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:23:52.182 [2024-12-06 17:23:34.290845] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.182 [2024-12-06 17:23:34.444427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:52.441 [2024-12-06 17:23:34.482283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.441 [2024-12-06 17:23:34.482347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.441 [2024-12-06 17:23:34.482361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.441 [2024-12-06 17:23:34.482371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.441 [2024-12-06 17:23:34.482380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.441 [2024-12-06 17:23:34.483232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.441 [2024-12-06 17:23:34.483245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.441 [2024-12-06 17:23:34.537943] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:23:52.441 [2024-12-06 17:23:34.538658] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:23:52.441 [2024-12-06 17:23:34.538670] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
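The startup banner above is directly actionable: tracepoint groups 0xFFFF are enabled, and the notices name the exact commands for inspecting them while the target runs. A sketch (the spdk_trace path and output file are assumptions for this repo layout; the -s and -i values come straight from the banner):

# Snapshot the live tracepoints named in the startup notice:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_events.txt

# Or preserve the shared-memory trace file for offline analysis, as suggested:
cp /dev/shm/nvmf_trace.0 /tmp/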
00:23:52.441 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.441 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:23:52.441 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.441 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.441 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 [2024-12-06 17:23:34.616178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 [2024-12-06 17:23:34.640632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 NULL1 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.442 17:23:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 Delay0 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100705 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:52.442 17:23:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:23:52.700 [2024-12-06 17:23:34.845488] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
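Flattened into plain commands, the provisioning above exports a null bdev wrapped in a delay bdev (1,000,000 us, i.e. 1 s, average and tail latency for both reads and writes) through cnode1, then points spdk_nvme_perf at the listener; the -c 0xC core mask is why the perf summaries further down attribute the two queues to lcores 2 and 3, and the sleep 2 gives perf a head start before the subsystem is deleted underneath it. A sketch, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_null_create NULL1 1000 512    # 1000 MiB backing bdev, 512-byte blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000    # 1 s read/write latencies
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 5 s of queue-depth-128, 70/30 random read/write, 512-byte I/O:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2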
00:23:54.601 17:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.601 17:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.601 17:23:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:23:54.601 Read completed with error (sct=0, sc=8)
00:23:54.601 Write completed with error (sct=0, sc=8)
00:23:54.601 starting I/O failed: -6
00:23:54.601 Read completed with error (sct=0, sc=8)
00:23:54.601 Write completed with error (sct=0, sc=8)
00:23:54.601 starting I/O failed: -6
00:23:54.601 [several hundred additional "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided]
00:23:54.601 [2024-12-06 17:23:36.882123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3a7e0 is same with the state(6) to be set
00:23:55.980 [2024-12-06 17:23:37.862966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2eaa0 is same with the state(6) to be set
00:23:55.980 [2024-12-06 17:23:37.881459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc39a50 is same with the state(6) to be set
00:23:55.980 [2024-12-06 17:23:37.882234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7a800d020 is same with the state(6) to be set
00:23:55.980 [2024-12-06 17:23:37.882673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc3cea0 is same with the state(6) to be set
00:23:55.981 [2024-12-06 17:23:37.883920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fa7a800d680 is same with the state(6) to be set
00:23:55.981 Initializing NVMe Controllers
00:23:55.981 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:23:55.981 Controller IO queue size 128, less than required.
00:23:55.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:55.981 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:23:55.981 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:23:55.981 Initialization complete. Launching workers.
00:23:55.981 ========================================================
00:23:55.981 Latency(us)
00:23:55.981 Device Information : IOPS MiB/s Average min max
00:23:55.981 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 173.07 0.08 888656.00 476.26 1013516.50
00:23:55.981 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.96 0.09 907281.19 1920.37 1013920.91
00:23:55.981 ========================================================
00:23:55.981 Total : 359.03 0.18 898303.02 476.26 1013920.91
00:23:55.981
00:23:55.981 [2024-12-06 17:23:37.884391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2eaa0 (9): Bad file descriptor
00:23:55.981 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:55.981 17:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.981 17:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:23:55.981 17:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100705 00:23:55.981 17:23:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
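That flood is the point of the test: sct=0, sc=8 is NVMe generic status 0x8, Command Aborted due to SQ Deletion, exactly what in-flight I/O should report when the subsystem is deleted under it, while "starting I/O failed: -6" is new submissions bouncing off the torn-down qpair with -ENXIO. Once the delete returns, the script only has to wait for perf to exit; roughly the shape of the loop behind the @34-@38 traces (a sketch, not the script verbatim):

# Poll until spdk_nvme_perf (pid $perf_pid) exits; give up after ~15 s,
# matching the "delay++ > 30" bound visible in the trace below.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    sleep 0.5
    if (( delay++ > 30 )); then
        echo "spdk_nvme_perf did not exit after subsystem delete" >&2
        exit 1
    fi
done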
00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 100705 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.240 [2024-12-06 17:23:38.409002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100751 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:23:56.240 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:56.499 [2024-12-06 17:23:38.610110] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:56.758 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:56.758 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:56.758 17:23:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:57.325 17:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:57.325 17:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:57.325 17:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:57.893 17:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:57.893 17:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:57.893 17:23:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:58.152 17:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:58.152 17:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:58.152 17:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:58.728 17:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:58.728 17:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:58.728 17:23:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:59.304 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:23:59.304 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751 00:23:59.304 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:23:59.563 Initializing NVMe Controllers 00:23:59.563 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:23:59.563 Controller IO queue size 128, less than required. 00:23:59.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:59.563 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:23:59.563 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:23:59.563 Initialization complete. Launching workers. 
00:23:59.563 ========================================================
00:23:59.563                                                                                                  Latency(us)
00:23:59.563 Device Information                                                                             : IOPS      MiB/s    Average        min        max
00:23:59.563 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003918.01 1000162.74 1012661.09
00:23:59.563 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006274.74 1000199.40 1015972.18
00:23:59.563 ========================================================
00:23:59.563 Total                                                                                          : 256.00 0.12 1005096.38 1000162.74 1015972.18
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100751
00:23:59.822 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100751) - No such process
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100751
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:59.822 17:23:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:59.822 rmmod nvme_tcp
00:23:59.822 rmmod nvme_fabrics
00:23:59.822 rmmod nvme_keyring
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 100673 ']'
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 100673
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 100673 ']'
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 100673
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 --
# ps --no-headers -o comm= 100673 00:23:59.822 killing process with pid 100673 00:23:59.822 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:59.823 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:59.823 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100673' 00:23:59.823 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 100673 00:23:59.823 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 100673 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:00.081 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:24:00.340 00:24:00.340 real 0m8.865s 00:24:00.340 user 0m23.986s 00:24:00.340 sys 0m2.411s 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:00.340 ************************************ 00:24:00.340 END TEST nvmf_delete_subsystem 00:24:00.340 ************************************ 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:00.340 ************************************ 00:24:00.340 START TEST nvmf_host_management 00:24:00.340 ************************************ 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:24:00.340 * Looking for test storage... 
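The delete_subsystem trace that ends above shows the test's liveness poll: spdk_nvme_perf runs in the background while the script probes its PID with kill -0 every half second, capped at roughly 20 probes, and once the subsystem is deleted out from under the workload the probe fails with "No such process" and teardown runs. A minimal bash sketch of that pattern, paraphrased from the xtrace rather than copied from test/nvmf/target/delete_subsystem.sh ($SPDK_BIN below is a stand-in for the build's bin directory, not a variable the script defines):

# Start the I/O generator in the background and remember its PID.
"$SPDK_BIN/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# kill -0 sends no signal; it only reports whether the PID still exists.
# Bound the wait at ~20 half-second probes so a wedged worker cannot hang the test.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break
    sleep 0.5
done
wait "$perf_pid" || true   # reap the child; a nonzero status is expected here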
00:24:00.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.340 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.600 --rc genhtml_branch_coverage=1 00:24:00.600 --rc genhtml_function_coverage=1 00:24:00.600 --rc genhtml_legend=1 00:24:00.600 --rc geninfo_all_blocks=1 00:24:00.600 --rc geninfo_unexecuted_blocks=1 00:24:00.600 00:24:00.600 ' 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.600 --rc genhtml_branch_coverage=1 00:24:00.600 --rc genhtml_function_coverage=1 00:24:00.600 --rc genhtml_legend=1 00:24:00.600 --rc geninfo_all_blocks=1 00:24:00.600 --rc geninfo_unexecuted_blocks=1 00:24:00.600 00:24:00.600 ' 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.600 --rc genhtml_branch_coverage=1 00:24:00.600 --rc genhtml_function_coverage=1 00:24:00.600 --rc genhtml_legend=1 00:24:00.600 --rc geninfo_all_blocks=1 00:24:00.600 --rc geninfo_unexecuted_blocks=1 00:24:00.600 00:24:00.600 ' 00:24:00.600 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.600 --rc genhtml_branch_coverage=1 00:24:00.600 --rc genhtml_function_coverage=1 00:24:00.600 --rc genhtml_legend=1 
00:24:00.600 --rc geninfo_all_blocks=1 00:24:00.601 --rc geninfo_unexecuted_blocks=1 00:24:00.601 00:24:00.601 ' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.601 17:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:00.601 17:23:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:00.601 Cannot find device "nvmf_init_br" 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:00.601 Cannot find device "nvmf_init_br2" 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:00.601 Cannot find device "nvmf_tgt_br" 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:24:00.601 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:00.602 Cannot find device "nvmf_tgt_br2" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:00.602 Cannot find device "nvmf_init_br" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:24:00.602 Cannot find device "nvmf_init_br2" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:00.602 Cannot find device "nvmf_tgt_br" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:00.602 Cannot find device "nvmf_tgt_br2" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:00.602 Cannot find device "nvmf_br" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:00.602 Cannot find device "nvmf_init_if" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:00.602 Cannot find device "nvmf_init_if2" 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:00.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:00.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:00.602 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:00.860 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:00.860 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:24:00.860 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:00.860 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:00.861 17:23:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:00.861 17:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:24:00.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:00.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms
00:24:00.861
00:24:00.861 --- 10.0.0.3 ping statistics ---
00:24:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:00.861 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:00.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:00.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms
00:24:00.861
00:24:00.861 --- 10.0.0.4 ping statistics ---
00:24:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:00.861 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:00.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:00.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms
00:24:00.861
00:24:00.861 --- 10.0.0.1 ping statistics ---
00:24:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:00.861 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:00.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:00.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms
00:24:00.861
00:24:00.861 --- 10.0.0.2 ping statistics ---
00:24:00.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:00.861 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=101041
00:24:00.861 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 101041
00:24:00.862 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101041 ']'
00:24:00.862 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:00.862 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:00.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:00.862 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
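The ping exchanges above are the tail end of nvmf_veth_init: the target runs inside the nvmf_tgt_ns_spdk network namespace, each veth pair has one end in the namespace (or on the host) and its peer enslaved to the nvmf_br bridge, and the accept rules carry an SPDK_NVMF comment so the later iptr cleanup can strip them with iptables-save | grep -v SPDK_NVMF | iptables-restore. A reduced sketch of the same wiring with a single initiator/target pair (the real common.sh sets up two initiator and two target interfaces the same way):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge                              # bridge joins the peer ends
ip link set nvmf_br up
ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Tagged rule: the comment doubles as a marker for teardown to grep away.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

ping -c 1 10.0.0.3    # host -> namespaced target across the bridge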
00:24:00.862 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.862 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.120 [2024-12-06 17:23:43.195533] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:01.120 [2024-12-06 17:23:43.196812] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:24:01.120 [2024-12-06 17:23:43.196880] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.120 [2024-12-06 17:23:43.352377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:01.120 [2024-12-06 17:23:43.385694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.120 [2024-12-06 17:23:43.385746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.120 [2024-12-06 17:23:43.385757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.120 [2024-12-06 17:23:43.385765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.120 [2024-12-06 17:23:43.385772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.120 [2024-12-06 17:23:43.386657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.120 [2024-12-06 17:23:43.386797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.120 [2024-12-06 17:23:43.386875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:01.120 [2024-12-06 17:23:43.386882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.393 [2024-12-06 17:23:43.437623] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:01.393 [2024-12-06 17:23:43.438191] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:01.393 [2024-12-06 17:23:43.438466] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:24:01.393 [2024-12-06 17:23:43.438475] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:01.393 [2024-12-06 17:23:43.438854] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
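waitforlisten, traced just before this startup banner, blocks until the freshly forked nvmf_tgt answers on its RPC socket; the max_retries=100 local seen above bounds the poll. A simplified stand-in for that helper, assuming only that the app creates a Unix socket at the RPC address (the real autotest_common.sh helper additionally retries an RPC against the socket before declaring the target ready):

# Poll until $rpc_addr exists as a socket or the process dies.
wait_for_rpc_socket() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited during startup
        [[ -S $rpc_addr ]] && return 0           # socket is up; RPCs can proceed
        sleep 0.1
    done
    return 1                                      # timed out waiting for the listener
}

wait_for_rpc_socket "$nvmfpid" /var/tmp/spdk.sock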
00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.393 [2024-12-06 17:23:43.508374] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.393 Malloc0 00:24:01.393 [2024-12-06 17:23:43.584734] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:01.393 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=101094 00:24:01.394 17:23:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 101094 /var/tmp/bdevperf.sock
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 101094 ']'
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:01.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:24:01.394 {
00:24:01.394 "params": {
00:24:01.394 "name": "Nvme$subsystem",
00:24:01.394 "trtype": "$TEST_TRANSPORT",
00:24:01.394 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:01.394 "adrfam": "ipv4",
00:24:01.394 "trsvcid": "$NVMF_PORT",
00:24:01.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:01.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:01.394 "hdgst": ${hdgst:-false},
00:24:01.394 "ddgst": ${ddgst:-false}
00:24:01.394 },
00:24:01.394 "method": "bdev_nvme_attach_controller"
00:24:01.394 }
00:24:01.394 EOF
00:24:01.394 )")
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
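The heredoc above is gen_nvmf_target_json's per-subsystem template; its output is piped through jq and handed to bdevperf as --json /dev/fd/63, which is simply bash process substitution seen from the child process's side. The resolved document for subsystem 0 is printed next in the trace. A reduced sketch of the mechanism (gen_config is a simplified stand-in for nvmf/common.sh's gen_nvmf_target_json, and the "subsystems"/"bdev" wrapper is abbreviated relative to what the helper really emits):

gen_config() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme$subsystem",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
            "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
          }
        }
      ]
    }
  ]
}
EOF
}

# <(...) exposes the generator's stdout as /dev/fd/NN in the child -- the
# /dev/fd/63 visible in the traced command line above.
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_config 0) \
    -q 64 -o 65536 -w verify -t 10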
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:24:01.394 17:23:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:24:01.394 "params": {
00:24:01.394 "name": "Nvme0",
00:24:01.394 "trtype": "tcp",
00:24:01.394 "traddr": "10.0.0.3",
00:24:01.394 "adrfam": "ipv4",
00:24:01.394 "trsvcid": "4420",
00:24:01.394 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:24:01.394 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:24:01.394 "hdgst": false,
00:24:01.394 "ddgst": false
00:24:01.394 },
00:24:01.394 "method": "bdev_nvme_attach_controller"
00:24:01.394 }'
00:24:01.653 [2024-12-06 17:23:43.689707] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:24:01.653 [2024-12-06 17:23:43.689793] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101094 ]
00:24:01.653 [2024-12-06 17:23:43.837984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:01.653 [2024-12-06 17:23:43.870813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:01.911 Running I/O for 10 seconds...
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:24:01.911 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=553 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 553 -ge 100 ']' 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.172 [2024-12-06 17:23:44.432230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb72a0 is same with the state(6) to be set 00:24:02.172 [2024-12-06 17:23:44.432295] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb72a0 is same with the state(6) to be set 00:24:02.172 [2024-12-06 17:23:44.432309] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcb72a0 is same with the state(6) to be set 00:24:02.172 [2024-12-06 17:23:44.432926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.172 [2024-12-06 17:23:44.432963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.432977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.172 [2024-12-06 17:23:44.432987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.432997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.172 [2024-12-06 17:23:44.433007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.433017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.172 [2024-12-06 17:23:44.433027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.433036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1575130 is same with the state(6) to be set 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.172 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:02.172 [2024-12-06 17:23:44.442135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.172 [2024-12-06 17:23:44.442171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.442193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.172 [2024-12-06 17:23:44.442204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.442216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.172 [2024-12-06 17:23:44.442226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.172 [2024-12-06 17:23:44.442237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 
17:23:44.442247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.173 [2024-12-06 17:23:44.442893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.173 [2024-12-06 17:23:44.442905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.442914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.442925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.442935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.442946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.442956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.442967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.442980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.442997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.174 [2024-12-06 17:23:44.443561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.174 [2024-12-06 17:23:44.443572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:02.175 [2024-12-06 17:23:44.443581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.175 [2024-12-06 17:23:44.443682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575130 (9): Bad file descriptor 00:24:02.175 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.175 [2024-12-06 17:23:44.444878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:02.175 17:23:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:24:02.175 task offset: 81920 on job bdev=Nvme0n1 fails 00:24:02.175 00:24:02.175 Latency(us) 00:24:02.175 [2024-12-06T17:23:44.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:02.175 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:02.175 Job: Nvme0n1 ended in about 0.44 seconds with error 00:24:02.175 Verification LBA range: start 0x0 length 0x400 00:24:02.175 Nvme0n1 : 0.44 1458.08 91.13 145.81 0.00 38550.26 1906.50 37891.72 00:24:02.175 [2024-12-06T17:23:44.473Z] =================================================================================================================== 00:24:02.175 [2024-12-06T17:23:44.473Z] Total : 1458.08 91.13 145.81 0.00 38550.26 1906.50 37891.72 00:24:02.175 [2024-12-06 17:23:44.446886] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:02.175 [2024-12-06 17:23:44.449681] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 101094 00:24:03.548 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (101094) - No such process 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.548 { 00:24:03.548 "params": { 00:24:03.548 "name": "Nvme$subsystem", 00:24:03.548 "trtype": "$TEST_TRANSPORT", 00:24:03.548 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.548 "adrfam": "ipv4", 00:24:03.548 "trsvcid": "$NVMF_PORT", 00:24:03.548 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.548 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:03.548 "hdgst": ${hdgst:-false}, 00:24:03.548 "ddgst": ${ddgst:-false} 00:24:03.548 }, 00:24:03.548 "method": "bdev_nvme_attach_controller" 00:24:03.548 } 00:24:03.548 EOF 00:24:03.548 )") 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:24:03.548 17:23:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:03.548 "params": { 00:24:03.548 "name": "Nvme0", 00:24:03.548 "trtype": "tcp", 00:24:03.548 "traddr": "10.0.0.3", 00:24:03.548 "adrfam": "ipv4", 00:24:03.548 "trsvcid": "4420", 00:24:03.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.548 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:03.548 "hdgst": false, 00:24:03.548 "ddgst": false 00:24:03.548 }, 00:24:03.548 "method": "bdev_nvme_attach_controller" 00:24:03.548 }' 00:24:03.548 [2024-12-06 17:23:45.500737] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:24:03.548 [2024-12-06 17:23:45.500813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101140 ] 00:24:03.548 [2024-12-06 17:23:45.644615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.548 [2024-12-06 17:23:45.679188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.548 Running I/O for 1 seconds... 
00:24:04.922 1536.00 IOPS, 96.00 MiB/s 00:24:04.922 Latency(us) 00:24:04.922 [2024-12-06T17:23:47.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.922 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.922 Verification LBA range: start 0x0 length 0x400 00:24:04.922 Nvme0n1 : 1.02 1572.62 98.29 0.00 0.00 39875.22 4885.41 37176.79 00:24:04.922 [2024-12-06T17:23:47.220Z] =================================================================================================================== 00:24:04.922 [2024-12-06T17:23:47.220Z] Total : 1572.62 98.29 0.00 0.00 39875.22 4885.41 37176.79 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:04.922 17:23:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:24:04.922 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:04.922 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:24:04.922 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:04.923 rmmod nvme_tcp 00:24:04.923 rmmod nvme_fabrics 00:24:04.923 rmmod nvme_keyring 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 101041 ']' 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 101041 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 101041 ']' 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 101041 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:04.923 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101041 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:04.923 killing process with pid 101041 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101041' 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 101041 00:24:04.923 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 101041 00:24:05.181 [2024-12-06 17:23:47.230260] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:05.181 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.181 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:24:05.440 00:24:05.440 real 0m5.011s 00:24:05.440 user 0m15.854s 00:24:05.440 sys 0m2.269s 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:24:05.440 ************************************ 00:24:05.440 END TEST nvmf_host_management 00:24:05.440 ************************************ 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.440 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:05.441 ************************************ 00:24:05.441 START TEST nvmf_lvol 00:24:05.441 ************************************ 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:24:05.441 * Looking for test storage... 
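(Annotation: the shutdown traced above funnels through killprocess, which re-validates the pid before signalling anything: kill -0 probes that the process is still alive, ps --no-headers -o comm= confirms the command name, and a sudo wrapper is never signalled directly. A condensed sketch of that guard, assuming the same helper shape as the trace; the real version also handles non-Linux ps variants:)

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0            # already gone, nothing to do
    process_name=$(ps --no-headers -o comm= "$pid")   # Linux form, as in the trace
    [ "$process_name" != sudo ] || return 1           # never kill the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it; a non-zero exit is expected here
}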
00:24:05.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.441 --rc genhtml_branch_coverage=1 00:24:05.441 --rc genhtml_function_coverage=1 00:24:05.441 --rc genhtml_legend=1 00:24:05.441 --rc geninfo_all_blocks=1 00:24:05.441 --rc geninfo_unexecuted_blocks=1 00:24:05.441 00:24:05.441 ' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.441 --rc genhtml_branch_coverage=1 00:24:05.441 --rc genhtml_function_coverage=1 00:24:05.441 --rc genhtml_legend=1 00:24:05.441 --rc geninfo_all_blocks=1 00:24:05.441 --rc geninfo_unexecuted_blocks=1 00:24:05.441 00:24:05.441 ' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.441 --rc genhtml_branch_coverage=1 00:24:05.441 --rc genhtml_function_coverage=1 00:24:05.441 --rc genhtml_legend=1 00:24:05.441 --rc geninfo_all_blocks=1 00:24:05.441 --rc geninfo_unexecuted_blocks=1 00:24:05.441 00:24:05.441 ' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:05.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:05.441 --rc genhtml_branch_coverage=1 00:24:05.441 --rc genhtml_function_coverage=1 00:24:05.441 --rc genhtml_legend=1 00:24:05.441 --rc geninfo_all_blocks=1 00:24:05.441 --rc geninfo_unexecuted_blocks=1 00:24:05.441 00:24:05.441 ' 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:05.441 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.701 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:05.702 17:23:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:05.702 Cannot find device "nvmf_init_br" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:05.702 Cannot find device "nvmf_init_br2" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:05.702 Cannot find device "nvmf_tgt_br" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:05.702 Cannot find device "nvmf_tgt_br2" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:05.702 Cannot find device "nvmf_init_br" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:05.702 Cannot find device "nvmf_init_br2" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:05.702 Cannot find 
device "nvmf_tgt_br" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:05.702 Cannot find device "nvmf_tgt_br2" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:05.702 Cannot find device "nvmf_br" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:05.702 Cannot find device "nvmf_init_if" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:05.702 Cannot find device "nvmf_init_if2" 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:05.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:05.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:05.702 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:05.703 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:05.962 17:23:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:05.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:05.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:24:05.962 00:24:05.962 --- 10.0.0.3 ping statistics --- 00:24:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.962 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:05.962 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:05.962 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:24:05.962 00:24:05.962 --- 10.0.0.4 ping statistics --- 00:24:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.962 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:05.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:24:05.962 00:24:05.962 --- 10.0.0.1 ping statistics --- 00:24:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.962 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:05.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.083 ms 00:24:05.962 00:24:05.962 --- 10.0.0.2 ping statistics --- 00:24:05.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.962 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101411 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101411 00:24:05.962 17:23:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101411 ']' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:24:05.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.962 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:05.962 [2024-12-06 17:23:48.192022] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:05.962 [2024-12-06 17:23:48.193081] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:24:05.962 [2024-12-06 17:23:48.193142] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.222 [2024-12-06 17:23:48.339732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:06.222 [2024-12-06 17:23:48.378398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.222 [2024-12-06 17:23:48.378459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.222 [2024-12-06 17:23:48.378473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.222 [2024-12-06 17:23:48.378483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.222 [2024-12-06 17:23:48.378492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.222 [2024-12-06 17:23:48.379572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.222 [2024-12-06 17:23:48.379680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.222 [2024-12-06 17:23:48.379686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.222 [2024-12-06 17:23:48.437577] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:06.222 [2024-12-06 17:23:48.438511] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:24:06.222 [2024-12-06 17:23:48.438539] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:24:06.222 [2024-12-06 17:23:48.438587] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
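The target startup traced above boils down to a short launch-and-wait sequence. A minimal sketch, distilled from the commands visible in this log (the polling loop is a simplified stand-in for waitforlisten, and spdk_get_version is used here only as a cheap liveness probe, not necessarily what the helper actually calls):

    # Launch nvmf_tgt inside the test namespace: shm id 0, tracepoint mask
    # 0xFFFF, interrupt mode, 3-core mask (0x7 -> cores 0, 1, 2).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!

    # Wait until the app is up and answering RPCs on /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done

Once the three reactors report in (cores 0-2 above) and the nvmf_tgt poll-group threads are confirmed in interrupt mode, the test proceeds to transport and bdev setup.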
00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.222 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:06.481 [2024-12-06 17:23:48.765233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.740 17:23:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:06.999 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:24:06.999 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.275 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:24:07.275 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:24:07.540 17:23:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:24:08.107 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ba2d3bb0-aec1-4cc0-9b9e-242db6184d9f 00:24:08.107 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ba2d3bb0-aec1-4cc0-9b9e-242db6184d9f lvol 20 00:24:08.367 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1bb1f9c5-48c3-441a-aa8b-a9c1df27a984 00:24:08.367 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:08.624 17:23:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1bb1f9c5-48c3-441a-aa8b-a9c1df27a984 00:24:08.882 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:09.139 [2024-12-06 17:23:51.397179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:09.139 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:09.397 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101545 00:24:09.397 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:24:09.397 17:23:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:24:10.771 17:23:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 1bb1f9c5-48c3-441a-aa8b-a9c1df27a984 MY_SNAPSHOT 00:24:10.771 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a0468aff-24eb-44da-9eb3-e40cdb817258 00:24:10.771 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 1bb1f9c5-48c3-441a-aa8b-a9c1df27a984 30 00:24:11.338 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a0468aff-24eb-44da-9eb3-e40cdb817258 MY_CLONE 00:24:11.596 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4b9f4521-2a97-4ab7-bfa0-b01dfca905fb 00:24:11.596 17:23:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 4b9f4521-2a97-4ab7-bfa0-b01dfca905fb 00:24:12.161 17:23:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101545 00:24:20.269 Initializing NVMe Controllers 00:24:20.269 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:24:20.269 Controller IO queue size 128, less than required. 00:24:20.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.269 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:24:20.269 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:24:20.269 Initialization complete. Launching workers. 
00:24:20.269 ======================================================== 00:24:20.269 Latency(us) 00:24:20.269 Device Information : IOPS MiB/s Average min max 00:24:20.269 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10651.90 41.61 12025.42 4683.19 74951.13 00:24:20.269 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10409.30 40.66 12303.73 5977.04 63072.97 00:24:20.269 ======================================================== 00:24:20.269 Total : 21061.20 82.27 12162.97 4683.19 74951.13 00:24:20.269 00:24:20.269 17:24:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:20.269 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1bb1f9c5-48c3-441a-aa8b-a9c1df27a984 00:24:20.269 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ba2d3bb0-aec1-4cc0-9b9e-242db6184d9f 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.528 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.528 rmmod nvme_tcp 00:24:20.528 rmmod nvme_fabrics 00:24:20.786 rmmod nvme_keyring 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101411 ']' 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101411 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101411 ']' 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101411 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101411 00:24:20.786 killing 
process with pid 101411 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101411' 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101411 00:24:20.786 17:24:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101411 00:24:20.786 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.786 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:20.787 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:21.045 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.046 
17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:24:21.046 ************************************ 00:24:21.046 END TEST nvmf_lvol 00:24:21.046 ************************************ 00:24:21.046 00:24:21.046 real 0m15.738s 00:24:21.046 user 0m56.357s 00:24:21.046 sys 0m5.731s 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.046 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:24:21.306 ************************************ 00:24:21.306 START TEST nvmf_lvs_grow 00:24:21.306 ************************************ 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:24:21.306 * Looking for test storage... 
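Stripped of xtrace noise, the nvmf_lvol run that just finished exercised the following RPC sequence (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the <...> placeholders stand for the UUIDs printed earlier in this log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                  # -> Malloc0
    rpc.py bdev_malloc_create 64 512                                  # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe both
    rpc.py bdev_lvol_create_lvstore raid0 lvs                         # -> <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                     # 20 MiB volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # ...spdk_nvme_perf runs randwrite over TCP while the volume is mutated:
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE
    rpc.py bdev_lvol_inflate <clone-uuid>
    # teardown after perf exits:
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>

The point of the interleaving is that snapshot, resize, clone, and inflate all happen while live randwrite I/O is in flight against the exported namespace, on an interrupt-mode target.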
00:24:21.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.306 --rc genhtml_branch_coverage=1 00:24:21.306 --rc genhtml_function_coverage=1 00:24:21.306 --rc genhtml_legend=1 00:24:21.306 --rc geninfo_all_blocks=1 00:24:21.306 --rc geninfo_unexecuted_blocks=1 00:24:21.306 00:24:21.306 ' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.306 --rc genhtml_branch_coverage=1 00:24:21.306 --rc genhtml_function_coverage=1 00:24:21.306 --rc genhtml_legend=1 00:24:21.306 --rc geninfo_all_blocks=1 00:24:21.306 --rc geninfo_unexecuted_blocks=1 00:24:21.306 00:24:21.306 ' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.306 --rc genhtml_branch_coverage=1 00:24:21.306 --rc genhtml_function_coverage=1 00:24:21.306 --rc genhtml_legend=1 00:24:21.306 --rc geninfo_all_blocks=1 00:24:21.306 --rc geninfo_unexecuted_blocks=1 00:24:21.306 00:24:21.306 ' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:21.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:21.306 --rc genhtml_branch_coverage=1 00:24:21.306 --rc genhtml_function_coverage=1 00:24:21.306 --rc genhtml_legend=1 00:24:21.306 --rc geninfo_all_blocks=1 00:24:21.306 --rc geninfo_unexecuted_blocks=1 00:24:21.306 00:24:21.306 ' 00:24:21.306 17:24:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.306 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
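The heavily duplicated PATH above is expected rather than a bug: every test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which unconditionally prepends the same toolchain directories. Reconstructed from the @2-@5 xtrace markers (a sketch of the effect, not the verbatim file):

    # paths/export.sh, as implied by the trace line numbers:
    PATH=/opt/golangci/1.54.2/bin:$PATH   # @2 prepends golangci
    PATH=/opt/go/1.21.1/bin:$PATH         # @3 prepends go
    PATH=/opt/protoc/21.7/bin:$PATH       # @4 prepends protoc
    export PATH                           # @5

Each sourcing adds three more entries to the front, which is why the value snowballs as the suite progresses; lookups still resolve correctly, since duplicate PATH entries are harmless.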
00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.307 17:24:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:21.307 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:21.566 Cannot find device "nvmf_init_br" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:21.566 Cannot find device "nvmf_init_br2" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:21.566 Cannot find device "nvmf_tgt_br" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.566 Cannot find device "nvmf_tgt_br2" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:21.566 Cannot find device "nvmf_init_br" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:21.566 Cannot find device "nvmf_init_br2" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:21.566 Cannot find device "nvmf_tgt_br" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:21.566 Cannot find device "nvmf_tgt_br2" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:21.566 Cannot find device "nvmf_br" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:21.566 Cannot find device "nvmf_init_if" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:21.566 Cannot find device "nvmf_init_if2" 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:24:21.566 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.566 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:21.567 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:24:21.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:21.826 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:24:21.826 00:24:21.826 --- 10.0.0.3 ping statistics --- 00:24:21.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.826 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:21.826 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:21.826 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:24:21.826 00:24:21.826 --- 10.0.0.4 ping statistics --- 00:24:21.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.826 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:21.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:24:21.826 00:24:21.826 --- 10.0.0.1 ping statistics --- 00:24:21.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.826 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:21.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:24:21.826 00:24:21.826 --- 10.0.0.2 ping statistics --- 00:24:21.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.826 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.826 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=101948 00:24:21.827 17:24:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 101948 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 101948 ']' 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.827 17:24:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:21.827 [2024-12-06 17:24:04.053098] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:24:21.827 [2024-12-06 17:24:04.054349] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:24:21.827 [2024-12-06 17:24:04.054423] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.085 [2024-12-06 17:24:04.205738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.086 [2024-12-06 17:24:04.243886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.086 [2024-12-06 17:24:04.243949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.086 [2024-12-06 17:24:04.243962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.086 [2024-12-06 17:24:04.243972] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.086 [2024-12-06 17:24:04.243982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.086 [2024-12-06 17:24:04.244341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.086 [2024-12-06 17:24:04.298467] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:22.086 [2024-12-06 17:24:04.298838] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
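For readers skimming the xtrace, the plumbing above reduces to a few moves: give the namespaced target-side interfaces their 10.0.0.3/10.0.0.4 addresses, bring every veth end up, bridge the four *_br ends together, punch iptables holes for TCP port 4420, verify reachability in both directions with single pings, and only then launch nvmf_tgt inside the namespace in interrupt mode. A condensed sketch follows; it assumes the veth pairs and the initiator-side addresses (10.0.0.1/10.0.0.2, implied by the in-namespace pings) were set up by earlier nvmf/common.sh steps not shown here — it is a recap, not the verbatim helper.

  NS=nvmf_tgt_ns_spdk
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2       # second target address (verbatim above)
  ip link add nvmf_br type bridge && ip link set nvmf_br up          # one bridge joins both veth pairs
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT # NVMe/TCP default port
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                                 # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1                             # namespace -> host
  ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x1                         # target pinned to core 0, interrupt mode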
00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.086 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:22.344 [2024-12-06 17:24:04.613116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.344 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:24:22.344 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:22.344 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:22.344 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:22.603 ************************************ 00:24:22.603 START TEST lvs_grow_clean 00:24:22.603 ************************************ 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:22.603 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:22.862 17:24:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:22.862 17:24:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:23.119 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:23.120 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:23.120 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:23.685 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:23.685 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:23.685 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ced1b800-1811-4af5-8499-d3565ed79cb8 lvol 150 00:24:23.943 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=869e28e9-0485-4ee6-8386-00517dcd1733 00:24:23.943 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:23.943 17:24:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:24.201 [2024-12-06 17:24:06.244875] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:24.201 [2024-12-06 17:24:06.244995] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:24.201 true 00:24:24.201 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:24.201 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:24.459 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:24.459 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:24.717 17:24:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 869e28e9-0485-4ee6-8386-00517dcd1733 00:24:24.975 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:25.541 [2024-12-06 17:24:07.637202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:25.542 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102102 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102102 /var/tmp/bdevperf.sock 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 102102 ']' 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:25.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:25.800 17:24:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.800 [2024-12-06 17:24:08.041453] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
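Stripped of xtrace noise, wiring the clean-variant target to bdevperf takes four RPCs against the default socket plus one against bdevperf's own socket (the attach step, which follows just below). The commands and arguments are copied from the log; only the RPC shell variable is introduced here for brevity.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0        # allow any host, serial SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 \
      869e28e9-0485-4ee6-8386-00517dcd1733                                 # the 150M lvol becomes namespace 1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0     # surfaces as bdev Nvme0n1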
00:24:25.800 [2024-12-06 17:24:08.042059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102102 ] 00:24:26.059 [2024-12-06 17:24:08.185860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.059 [2024-12-06 17:24:08.219473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.059 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.059 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:24:26.059 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:26.626 Nvme0n1 00:24:26.626 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:26.626 [ 00:24:26.626 { 00:24:26.626 "aliases": [ 00:24:26.626 "869e28e9-0485-4ee6-8386-00517dcd1733" 00:24:26.626 ], 00:24:26.626 "assigned_rate_limits": { 00:24:26.626 "r_mbytes_per_sec": 0, 00:24:26.626 "rw_ios_per_sec": 0, 00:24:26.626 "rw_mbytes_per_sec": 0, 00:24:26.626 "w_mbytes_per_sec": 0 00:24:26.626 }, 00:24:26.626 "block_size": 4096, 00:24:26.626 "claimed": false, 00:24:26.626 "driver_specific": { 00:24:26.626 "mp_policy": "active_passive", 00:24:26.626 "nvme": [ 00:24:26.626 { 00:24:26.626 "ctrlr_data": { 00:24:26.626 "ana_reporting": false, 00:24:26.626 "cntlid": 1, 00:24:26.626 "firmware_revision": "25.01", 00:24:26.626 "model_number": "SPDK bdev Controller", 00:24:26.626 "multi_ctrlr": true, 00:24:26.626 "oacs": { 00:24:26.626 "firmware": 0, 00:24:26.626 "format": 0, 00:24:26.626 "ns_manage": 0, 00:24:26.626 "security": 0 00:24:26.626 }, 00:24:26.626 "serial_number": "SPDK0", 00:24:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.626 "vendor_id": "0x8086" 00:24:26.626 }, 00:24:26.626 "ns_data": { 00:24:26.626 "can_share": true, 00:24:26.626 "id": 1 00:24:26.626 }, 00:24:26.626 "trid": { 00:24:26.626 "adrfam": "IPv4", 00:24:26.626 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:26.626 "traddr": "10.0.0.3", 00:24:26.626 "trsvcid": "4420", 00:24:26.626 "trtype": "TCP" 00:24:26.626 }, 00:24:26.626 "vs": { 00:24:26.626 "nvme_version": "1.3" 00:24:26.626 } 00:24:26.626 } 00:24:26.626 ] 00:24:26.626 }, 00:24:26.626 "memory_domains": [ 00:24:26.626 { 00:24:26.626 "dma_device_id": "system", 00:24:26.626 "dma_device_type": 1 00:24:26.626 } 00:24:26.626 ], 00:24:26.626 "name": "Nvme0n1", 00:24:26.626 "num_blocks": 38912, 00:24:26.626 "numa_id": -1, 00:24:26.626 "product_name": "NVMe disk", 00:24:26.626 "supported_io_types": { 00:24:26.626 "abort": true, 00:24:26.626 "compare": true, 00:24:26.626 "compare_and_write": true, 00:24:26.626 "copy": true, 00:24:26.626 "flush": true, 00:24:26.626 "get_zone_info": false, 00:24:26.626 "nvme_admin": true, 00:24:26.626 "nvme_io": true, 00:24:26.626 "nvme_io_md": false, 00:24:26.626 "nvme_iov_md": false, 00:24:26.626 "read": true, 00:24:26.626 "reset": true, 00:24:26.626 "seek_data": false, 00:24:26.626 
"seek_hole": false, 00:24:26.626 "unmap": true, 00:24:26.626 "write": true, 00:24:26.626 "write_zeroes": true, 00:24:26.626 "zcopy": false, 00:24:26.626 "zone_append": false, 00:24:26.626 "zone_management": false 00:24:26.626 }, 00:24:26.626 "uuid": "869e28e9-0485-4ee6-8386-00517dcd1733", 00:24:26.626 "zoned": false 00:24:26.626 } 00:24:26.626 ] 00:24:26.626 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102136 00:24:26.626 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:26.626 17:24:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:26.884 Running I/O for 10 seconds... 00:24:27.815 Latency(us) 00:24:27.815 [2024-12-06T17:24:10.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:27.815 Nvme0n1 : 1.00 6772.00 26.45 0.00 0.00 0.00 0.00 0.00 00:24:27.815 [2024-12-06T17:24:10.113Z] =================================================================================================================== 00:24:27.815 [2024-12-06T17:24:10.113Z] Total : 6772.00 26.45 0.00 0.00 0.00 0.00 0.00 00:24:27.815 00:24:28.767 17:24:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:28.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:28.767 Nvme0n1 : 2.00 7069.00 27.61 0.00 0.00 0.00 0.00 0.00 00:24:28.767 [2024-12-06T17:24:11.065Z] =================================================================================================================== 00:24:28.767 [2024-12-06T17:24:11.065Z] Total : 7069.00 27.61 0.00 0.00 0.00 0.00 0.00 00:24:28.767 00:24:29.034 true 00:24:29.034 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:29.034 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:29.599 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:29.599 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:29.599 17:24:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 102136 00:24:29.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:29.857 Nvme0n1 : 3.00 7263.67 28.37 0.00 0.00 0.00 0.00 0.00 00:24:29.857 [2024-12-06T17:24:12.155Z] =================================================================================================================== 00:24:29.857 [2024-12-06T17:24:12.155Z] Total : 7263.67 28.37 0.00 0.00 0.00 0.00 0.00 00:24:29.857 00:24:30.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:30.791 Nvme0n1 : 4.00 7385.00 28.85 0.00 0.00 0.00 0.00 0.00 00:24:30.791 
[2024-12-06T17:24:13.089Z] =================================================================================================================== 00:24:30.791 [2024-12-06T17:24:13.089Z] Total : 7385.00 28.85 0.00 0.00 0.00 0.00 0.00 00:24:30.791 00:24:31.727 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:31.727 Nvme0n1 : 5.00 7427.60 29.01 0.00 0.00 0.00 0.00 0.00 00:24:31.727 [2024-12-06T17:24:14.025Z] =================================================================================================================== 00:24:31.727 [2024-12-06T17:24:14.025Z] Total : 7427.60 29.01 0.00 0.00 0.00 0.00 0.00 00:24:31.727 00:24:33.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:33.105 Nvme0n1 : 6.00 7436.50 29.05 0.00 0.00 0.00 0.00 0.00 00:24:33.105 [2024-12-06T17:24:15.403Z] =================================================================================================================== 00:24:33.105 [2024-12-06T17:24:15.403Z] Total : 7436.50 29.05 0.00 0.00 0.00 0.00 0.00 00:24:33.105 00:24:34.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:34.038 Nvme0n1 : 7.00 7443.00 29.07 0.00 0.00 0.00 0.00 0.00 00:24:34.038 [2024-12-06T17:24:16.336Z] =================================================================================================================== 00:24:34.038 [2024-12-06T17:24:16.336Z] Total : 7443.00 29.07 0.00 0.00 0.00 0.00 0.00 00:24:34.038 00:24:34.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:34.974 Nvme0n1 : 8.00 7448.62 29.10 0.00 0.00 0.00 0.00 0.00 00:24:34.974 [2024-12-06T17:24:17.272Z] =================================================================================================================== 00:24:34.974 [2024-12-06T17:24:17.272Z] Total : 7448.62 29.10 0.00 0.00 0.00 0.00 0.00 00:24:34.974 00:24:35.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:35.909 Nvme0n1 : 9.00 7458.89 29.14 0.00 0.00 0.00 0.00 0.00 00:24:35.909 [2024-12-06T17:24:18.207Z] =================================================================================================================== 00:24:35.909 [2024-12-06T17:24:18.207Z] Total : 7458.89 29.14 0.00 0.00 0.00 0.00 0.00 00:24:35.909 00:24:36.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:36.845 Nvme0n1 : 10.00 7459.80 29.14 0.00 0.00 0.00 0.00 0.00 00:24:36.845 [2024-12-06T17:24:19.143Z] =================================================================================================================== 00:24:36.845 [2024-12-06T17:24:19.143Z] Total : 7459.80 29.14 0.00 0.00 0.00 0.00 0.00 00:24:36.845 00:24:36.845 00:24:36.845 Latency(us) 00:24:36.845 [2024-12-06T17:24:19.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:36.845 Nvme0n1 : 10.01 7462.47 29.15 0.00 0.00 17140.63 6702.55 62437.93 00:24:36.845 [2024-12-06T17:24:19.143Z] =================================================================================================================== 00:24:36.845 [2024-12-06T17:24:19.143Z] Total : 7462.47 29.15 0.00 0.00 17140.63 6702.55 62437.93 00:24:36.845 { 00:24:36.845 "results": [ 00:24:36.845 { 00:24:36.845 "job": "Nvme0n1", 00:24:36.845 "core_mask": "0x2", 00:24:36.845 "workload": "randwrite", 00:24:36.845 "status": "finished", 00:24:36.845 "queue_depth": 128, 00:24:36.845 "io_size": 4096, 
00:24:36.845 "runtime": 10.013575, 00:24:36.845 "iops": 7462.469697385799, 00:24:36.845 "mibps": 29.150272255413277, 00:24:36.845 "io_failed": 0, 00:24:36.845 "io_timeout": 0, 00:24:36.845 "avg_latency_us": 17140.633381493113, 00:24:36.845 "min_latency_us": 6702.545454545455, 00:24:36.845 "max_latency_us": 62437.93454545455 00:24:36.845 } 00:24:36.845 ], 00:24:36.845 "core_count": 1 00:24:36.845 } 00:24:36.845 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102102 00:24:36.845 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 102102 ']' 00:24:36.845 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 102102 00:24:36.845 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:24:36.845 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.845 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102102 00:24:36.845 killing process with pid 102102 00:24:36.845 Received shutdown signal, test time was about 10.000000 seconds 00:24:36.845 00:24:36.845 Latency(us) 00:24:36.845 [2024-12-06T17:24:19.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.845 [2024-12-06T17:24:19.143Z] =================================================================================================================== 00:24:36.845 [2024-12-06T17:24:19.144Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:36.846 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:36.846 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:36.846 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102102' 00:24:36.846 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 102102 00:24:36.846 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 102102 00:24:37.104 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:37.363 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:37.637 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:37.637 17:24:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 
00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:38.216 [2024-12-06 17:24:20.460943] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:38.216 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:38.475 2024/12/06 17:24:20 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:ced1b800-1811-4af5-8499-d3565ed79cb8], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:24:38.475 request: 00:24:38.475 { 00:24:38.475 "method": "bdev_lvol_get_lvstores", 00:24:38.475 "params": { 00:24:38.475 "uuid": "ced1b800-1811-4af5-8499-d3565ed79cb8" 00:24:38.475 } 00:24:38.475 } 00:24:38.475 Got JSON-RPC error response 00:24:38.475 GoRPCClient: error on JSON-RPC call 00:24:38.734 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:24:38.734 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 
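The NOT bdev_lvol_get_lvstores exchange just above is a negative assertion: once bdev_aio_delete hot-removes the base bdev, the lvstore must be gone, so the RPC is required to fail — and it does, with Code=-19 (ENODEV). In essence NOT inverts the exit status, as in this simplified sketch; the real helper in autotest_common.sh additionally special-cases statuses above 128 (signal deaths), which is what the (( es > 128 )) branch in the xtrace is doing:

  NOT() {                        # simplified; signal and allow-list handling elided
      local es=0
      "$@" || es=$?
      (( es != 0 ))              # succeed only if the wrapped command failed
  }
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8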
00:24:38.734 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:38.734 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:38.734 17:24:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:38.994 aio_bdev 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 869e28e9-0485-4ee6-8386-00517dcd1733 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=869e28e9-0485-4ee6-8386-00517dcd1733 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:38.994 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:39.253 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 869e28e9-0485-4ee6-8386-00517dcd1733 -t 2000 00:24:39.512 [ 00:24:39.513 { 00:24:39.513 "aliases": [ 00:24:39.513 "lvs/lvol" 00:24:39.513 ], 00:24:39.513 "assigned_rate_limits": { 00:24:39.513 "r_mbytes_per_sec": 0, 00:24:39.513 "rw_ios_per_sec": 0, 00:24:39.513 "rw_mbytes_per_sec": 0, 00:24:39.513 "w_mbytes_per_sec": 0 00:24:39.513 }, 00:24:39.513 "block_size": 4096, 00:24:39.513 "claimed": false, 00:24:39.513 "driver_specific": { 00:24:39.513 "lvol": { 00:24:39.513 "base_bdev": "aio_bdev", 00:24:39.513 "clone": false, 00:24:39.513 "esnap_clone": false, 00:24:39.513 "lvol_store_uuid": "ced1b800-1811-4af5-8499-d3565ed79cb8", 00:24:39.513 "num_allocated_clusters": 38, 00:24:39.513 "snapshot": false, 00:24:39.513 "thin_provision": false 00:24:39.513 } 00:24:39.513 }, 00:24:39.513 "name": "869e28e9-0485-4ee6-8386-00517dcd1733", 00:24:39.513 "num_blocks": 38912, 00:24:39.513 "product_name": "Logical Volume", 00:24:39.513 "supported_io_types": { 00:24:39.513 "abort": false, 00:24:39.513 "compare": false, 00:24:39.513 "compare_and_write": false, 00:24:39.513 "copy": false, 00:24:39.513 "flush": false, 00:24:39.513 "get_zone_info": false, 00:24:39.513 "nvme_admin": false, 00:24:39.513 "nvme_io": false, 00:24:39.513 "nvme_io_md": false, 00:24:39.513 "nvme_iov_md": false, 00:24:39.513 "read": true, 00:24:39.513 "reset": true, 00:24:39.513 "seek_data": true, 00:24:39.513 "seek_hole": true, 00:24:39.513 "unmap": true, 00:24:39.513 "write": true, 00:24:39.513 "write_zeroes": true, 00:24:39.513 "zcopy": false, 00:24:39.513 "zone_append": false, 00:24:39.513 "zone_management": false 00:24:39.513 }, 00:24:39.513 "uuid": 
"869e28e9-0485-4ee6-8386-00517dcd1733", 00:24:39.513 "zoned": false 00:24:39.513 } 00:24:39.513 ] 00:24:39.513 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:24:39.513 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:39.513 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:24:39.771 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:24:39.771 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:39.771 17:24:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:24:40.030 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:24:40.030 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 869e28e9-0485-4ee6-8386-00517dcd1733 00:24:40.288 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ced1b800-1811-4af5-8499-d3565ed79cb8 00:24:40.856 17:24:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:40.856 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:41.425 ************************************ 00:24:41.425 END TEST lvs_grow_clean 00:24:41.425 ************************************ 00:24:41.425 00:24:41.425 real 0m18.852s 00:24:41.425 user 0m18.195s 00:24:41.425 sys 0m2.127s 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:24:41.425 ************************************ 00:24:41.425 START TEST lvs_grow_dirty 00:24:41.425 ************************************ 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:24:41.425 17:24:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:41.425 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:41.684 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:24:41.684 17:24:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:24:41.942 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:41.942 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:24:41.943 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:42.510 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:24:42.510 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:24:42.510 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e976eab1-560a-41dc-b77f-a552fd0a08c5 lvol 150 00:24:42.768 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f4f93a6-98f4-4135-99eb-53d1477f639a 00:24:42.768 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:24:42.768 17:24:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:24:43.026 [2024-12-06 17:24:25.120910] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:24:43.026 [2024-12-06 17:24:25.121036] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:24:43.026 true 00:24:43.026 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:43.026 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:24:43.283 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:24:43.283 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:24:43.542 17:24:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f4f93a6-98f4-4135-99eb-53d1477f639a 00:24:43.800 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:44.058 [2024-12-06 17:24:26.341431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:44.317 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102538 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102538 /var/tmp/bdevperf.sock 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102538 ']' 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
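waitforlisten 102538 /var/tmp/bdevperf.sock gates the rest of the test on bdevperf's RPC socket actually answering, retrying up to the max_retries=100 shown in the xtrace. A plausible minimal version, assuming rpc_get_methods as the liveness probe (the helper in autotest_common.sh differs in its details):

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for (( i = 100; i > 0; i-- )); do                  # max_retries=100, as in the log
          kill -0 "$pid" 2>/dev/null || return 1         # give up if the process died
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
              rpc_get_methods &>/dev/null && return 0    # socket is up and answering
          sleep 0.5
      done
      return 1                                           # never came up
  }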
00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.575 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:44.575 [2024-12-06 17:24:26.735417] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:24:44.576 [2024-12-06 17:24:26.735516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102538 ] 00:24:44.834 [2024-12-06 17:24:26.879067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.834 [2024-12-06 17:24:26.912256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.834 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.834 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:44.834 17:24:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:24:45.093 Nvme0n1 00:24:45.093 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:24:45.352 [ 00:24:45.352 { 00:24:45.352 "aliases": [ 00:24:45.352 "5f4f93a6-98f4-4135-99eb-53d1477f639a" 00:24:45.352 ], 00:24:45.352 "assigned_rate_limits": { 00:24:45.352 "r_mbytes_per_sec": 0, 00:24:45.352 "rw_ios_per_sec": 0, 00:24:45.352 "rw_mbytes_per_sec": 0, 00:24:45.352 "w_mbytes_per_sec": 0 00:24:45.352 }, 00:24:45.352 "block_size": 4096, 00:24:45.352 "claimed": false, 00:24:45.352 "driver_specific": { 00:24:45.352 "mp_policy": "active_passive", 00:24:45.352 "nvme": [ 00:24:45.352 { 00:24:45.352 "ctrlr_data": { 00:24:45.352 "ana_reporting": false, 00:24:45.352 "cntlid": 1, 00:24:45.352 "firmware_revision": "25.01", 00:24:45.352 "model_number": "SPDK bdev Controller", 00:24:45.352 "multi_ctrlr": true, 00:24:45.352 "oacs": { 00:24:45.352 "firmware": 0, 00:24:45.352 "format": 0, 00:24:45.352 "ns_manage": 0, 00:24:45.352 "security": 0 00:24:45.352 }, 00:24:45.352 "serial_number": "SPDK0", 00:24:45.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.352 "vendor_id": "0x8086" 00:24:45.352 }, 00:24:45.352 "ns_data": { 00:24:45.352 "can_share": true, 00:24:45.352 "id": 1 00:24:45.352 }, 00:24:45.352 "trid": { 00:24:45.352 "adrfam": "IPv4", 00:24:45.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:45.352 "traddr": "10.0.0.3", 00:24:45.352 "trsvcid": "4420", 00:24:45.352 "trtype": "TCP" 00:24:45.352 }, 00:24:45.352 "vs": { 00:24:45.352 "nvme_version": "1.3" 00:24:45.352 } 00:24:45.352 } 00:24:45.352 ] 00:24:45.352 }, 00:24:45.352 "memory_domains": [ 00:24:45.352 { 00:24:45.352 "dma_device_id": "system", 00:24:45.352 "dma_device_type": 1 
00:24:45.352 } 00:24:45.352 ], 00:24:45.352 "name": "Nvme0n1", 00:24:45.352 "num_blocks": 38912, 00:24:45.352 "numa_id": -1, 00:24:45.352 "product_name": "NVMe disk", 00:24:45.352 "supported_io_types": { 00:24:45.352 "abort": true, 00:24:45.352 "compare": true, 00:24:45.352 "compare_and_write": true, 00:24:45.352 "copy": true, 00:24:45.352 "flush": true, 00:24:45.352 "get_zone_info": false, 00:24:45.352 "nvme_admin": true, 00:24:45.352 "nvme_io": true, 00:24:45.352 "nvme_io_md": false, 00:24:45.352 "nvme_iov_md": false, 00:24:45.352 "read": true, 00:24:45.352 "reset": true, 00:24:45.352 "seek_data": false, 00:24:45.352 "seek_hole": false, 00:24:45.352 "unmap": true, 00:24:45.352 "write": true, 00:24:45.352 "write_zeroes": true, 00:24:45.352 "zcopy": false, 00:24:45.352 "zone_append": false, 00:24:45.352 "zone_management": false 00:24:45.352 }, 00:24:45.352 "uuid": "5f4f93a6-98f4-4135-99eb-53d1477f639a", 00:24:45.352 "zoned": false 00:24:45.352 } 00:24:45.352 ] 00:24:45.611 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102571 00:24:45.611 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:24:45.611 17:24:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.611 Running I/O for 10 seconds... 00:24:46.547 Latency(us) 00:24:46.547 [2024-12-06T17:24:28.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:46.547 Nvme0n1 : 1.00 8118.00 31.71 0.00 0.00 0.00 0.00 0.00 00:24:46.547 [2024-12-06T17:24:28.845Z] =================================================================================================================== 00:24:46.547 [2024-12-06T17:24:28.845Z] Total : 8118.00 31.71 0.00 0.00 0.00 0.00 0.00 00:24:46.547 00:24:47.482 17:24:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:47.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:47.792 Nvme0n1 : 2.00 8100.00 31.64 0.00 0.00 0.00 0.00 0.00 00:24:47.792 [2024-12-06T17:24:30.090Z] =================================================================================================================== 00:24:47.792 [2024-12-06T17:24:30.090Z] Total : 8100.00 31.64 0.00 0.00 0.00 0.00 0.00 00:24:47.792 00:24:47.792 true 00:24:47.792 17:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:47.792 17:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:24:48.357 17:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:24:48.357 17:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:24:48.357 17:24:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@65 -- # wait 102571 00:24:48.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:48.614 Nvme0n1 : 3.00 7987.67 31.20 0.00 0.00 0.00 0.00 0.00 00:24:48.614 [2024-12-06T17:24:30.912Z] =================================================================================================================== 00:24:48.614 [2024-12-06T17:24:30.912Z] Total : 7987.67 31.20 0.00 0.00 0.00 0.00 0.00 00:24:48.614 00:24:49.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:49.546 Nvme0n1 : 4.00 7935.50 31.00 0.00 0.00 0.00 0.00 0.00 00:24:49.546 [2024-12-06T17:24:31.844Z] =================================================================================================================== 00:24:49.546 [2024-12-06T17:24:31.844Z] Total : 7935.50 31.00 0.00 0.00 0.00 0.00 0.00 00:24:49.546 00:24:50.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:50.918 Nvme0n1 : 5.00 7870.00 30.74 0.00 0.00 0.00 0.00 0.00 00:24:50.918 [2024-12-06T17:24:33.216Z] =================================================================================================================== 00:24:50.918 [2024-12-06T17:24:33.216Z] Total : 7870.00 30.74 0.00 0.00 0.00 0.00 0.00 00:24:50.918 00:24:51.852 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:51.852 Nvme0n1 : 6.00 7813.33 30.52 0.00 0.00 0.00 0.00 0.00 00:24:51.852 [2024-12-06T17:24:34.150Z] =================================================================================================================== 00:24:51.852 [2024-12-06T17:24:34.150Z] Total : 7813.33 30.52 0.00 0.00 0.00 0.00 0.00 00:24:51.852 00:24:52.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:52.788 Nvme0n1 : 7.00 7385.14 28.85 0.00 0.00 0.00 0.00 0.00 00:24:52.788 [2024-12-06T17:24:35.086Z] =================================================================================================================== 00:24:52.788 [2024-12-06T17:24:35.086Z] Total : 7385.14 28.85 0.00 0.00 0.00 0.00 0.00 00:24:52.788 00:24:53.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:53.720 Nvme0n1 : 8.00 7322.88 28.60 0.00 0.00 0.00 0.00 0.00 00:24:53.720 [2024-12-06T17:24:36.018Z] =================================================================================================================== 00:24:53.720 [2024-12-06T17:24:36.018Z] Total : 7322.88 28.60 0.00 0.00 0.00 0.00 0.00 00:24:53.720 00:24:54.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:54.657 Nvme0n1 : 9.00 7289.11 28.47 0.00 0.00 0.00 0.00 0.00 00:24:54.657 [2024-12-06T17:24:36.955Z] =================================================================================================================== 00:24:54.657 [2024-12-06T17:24:36.955Z] Total : 7289.11 28.47 0.00 0.00 0.00 0.00 0.00 00:24:54.657 00:24:55.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:55.594 Nvme0n1 : 10.00 7261.10 28.36 0.00 0.00 0.00 0.00 0.00 00:24:55.594 [2024-12-06T17:24:37.892Z] =================================================================================================================== 00:24:55.594 [2024-12-06T17:24:37.892Z] Total : 7261.10 28.36 0.00 0.00 0.00 0.00 0.00 00:24:55.594 00:24:55.594 00:24:55.594 Latency(us) 00:24:55.594 [2024-12-06T17:24:37.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.594 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:24:55.594 Nvme0n1 : 10.01 7259.40 28.36 0.00 0.00 17618.60 8102.63 366048.35 00:24:55.594 [2024-12-06T17:24:37.892Z] =================================================================================================================== 00:24:55.594 [2024-12-06T17:24:37.892Z] Total : 7259.40 28.36 0.00 0.00 17618.60 8102.63 366048.35 00:24:55.594 { 00:24:55.594 "results": [ 00:24:55.594 { 00:24:55.594 "job": "Nvme0n1", 00:24:55.594 "core_mask": "0x2", 00:24:55.594 "workload": "randwrite", 00:24:55.594 "status": "finished", 00:24:55.594 "queue_depth": 128, 00:24:55.594 "io_size": 4096, 00:24:55.594 "runtime": 10.011298, 00:24:55.594 "iops": 7259.398331764773, 00:24:55.594 "mibps": 28.357024733456143, 00:24:55.594 "io_failed": 0, 00:24:55.594 "io_timeout": 0, 00:24:55.594 "avg_latency_us": 17618.59951906094, 00:24:55.594 "min_latency_us": 8102.632727272728, 00:24:55.594 "max_latency_us": 366048.3490909091 00:24:55.594 } 00:24:55.594 ], 00:24:55.594 "core_count": 1 00:24:55.594 } 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102538 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102538 ']' 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102538 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102538 00:24:55.594 killing process with pid 102538 00:24:55.594 Received shutdown signal, test time was about 10.000000 seconds 00:24:55.594 00:24:55.594 Latency(us) 00:24:55.594 [2024-12-06T17:24:37.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.594 [2024-12-06T17:24:37.892Z] =================================================================================================================== 00:24:55.594 [2024-12-06T17:24:37.892Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102538' 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102538 00:24:55.594 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102538 00:24:55.853 17:24:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:24:56.113 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:56.372 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:24:56.372 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:56.939 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:24:56.939 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:24:56.939 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101948 00:24:56.939 17:24:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101948 00:24:56.939 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101948 Killed "${NVMF_APP[@]}" "$@" 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=102726 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 102726 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102726 ']' 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
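Note: the trace above is the harness's killprocess/waitforlisten pair around a deliberately dirty restart — the old target (pid 101948) is killed with -9 so the lvstore is never unloaded, then a new target is started in interrupt mode. Condensed into plain shell it is roughly the sketch below; the pid variable and the polling loop are illustrative stand-ins for those helpers, not the harness code itself:

    kill -9 "$old_nvmfpid"    # dirty stop: the lvstore is never cleanly unloaded
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # poll until the target answers on its default RPC socket (what waitforlisten does)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done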
00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:56.939 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:24:57.199 [2024-12-06 17:24:39.083221] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:24:57.199 [2024-12-06 17:24:39.084303] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:24:57.199 [2024-12-06 17:24:39.084371] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:57.199 [2024-12-06 17:24:39.237951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:57.199 [2024-12-06 17:24:39.270498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:57.199 [2024-12-06 17:24:39.270561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:57.199 [2024-12-06 17:24:39.270574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:57.199 [2024-12-06 17:24:39.270582] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:57.199 [2024-12-06 17:24:39.270590] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:57.199 [2024-12-06 17:24:39.270889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:57.199 [2024-12-06 17:24:39.323163] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:24:57.199 [2024-12-06 17:24:39.323488] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
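Note: the app_setup_trace notices above give the recipe for inspecting the 0xFFFF tracepoint mask while the target runs. Per the log's own hint, a snapshot would look roughly like this (the spdk_trace binary location is an assumption based on the build paths seen elsewhere in this log):

    # take a snapshot of the nvmf tracepoints from the running instance (shm id 0)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # ...or keep the raw shared-memory file for offline analysis, as the notice suggests
    cp /dev/shm/nvmf_trace.0 .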
00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:57.199 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:24:57.457 [2024-12-06 17:24:39.717022] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:57.457 [2024-12-06 17:24:39.717466] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:57.457 [2024-12-06 17:24:39.717742] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5f4f93a6-98f4-4135-99eb-53d1477f639a 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5f4f93a6-98f4-4135-99eb-53d1477f639a 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:24:57.716 17:24:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:57.974 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f4f93a6-98f4-4135-99eb-53d1477f639a -t 2000 00:24:58.254 [ 00:24:58.254 { 00:24:58.254 "aliases": [ 00:24:58.254 "lvs/lvol" 00:24:58.254 ], 00:24:58.254 "assigned_rate_limits": { 00:24:58.254 "r_mbytes_per_sec": 0, 00:24:58.254 "rw_ios_per_sec": 0, 00:24:58.254 "rw_mbytes_per_sec": 0, 00:24:58.254 "w_mbytes_per_sec": 0 00:24:58.254 }, 00:24:58.254 "block_size": 4096, 00:24:58.254 "claimed": false, 00:24:58.254 "driver_specific": { 00:24:58.254 "lvol": { 00:24:58.254 "base_bdev": "aio_bdev", 00:24:58.254 "clone": false, 00:24:58.254 "esnap_clone": false, 00:24:58.254 
"lvol_store_uuid": "e976eab1-560a-41dc-b77f-a552fd0a08c5", 00:24:58.254 "num_allocated_clusters": 38, 00:24:58.254 "snapshot": false, 00:24:58.254 "thin_provision": false 00:24:58.254 } 00:24:58.254 }, 00:24:58.254 "name": "5f4f93a6-98f4-4135-99eb-53d1477f639a", 00:24:58.254 "num_blocks": 38912, 00:24:58.254 "product_name": "Logical Volume", 00:24:58.254 "supported_io_types": { 00:24:58.254 "abort": false, 00:24:58.254 "compare": false, 00:24:58.254 "compare_and_write": false, 00:24:58.254 "copy": false, 00:24:58.254 "flush": false, 00:24:58.254 "get_zone_info": false, 00:24:58.254 "nvme_admin": false, 00:24:58.254 "nvme_io": false, 00:24:58.254 "nvme_io_md": false, 00:24:58.254 "nvme_iov_md": false, 00:24:58.254 "read": true, 00:24:58.254 "reset": true, 00:24:58.254 "seek_data": true, 00:24:58.254 "seek_hole": true, 00:24:58.254 "unmap": true, 00:24:58.254 "write": true, 00:24:58.254 "write_zeroes": true, 00:24:58.254 "zcopy": false, 00:24:58.254 "zone_append": false, 00:24:58.254 "zone_management": false 00:24:58.254 }, 00:24:58.254 "uuid": "5f4f93a6-98f4-4135-99eb-53d1477f639a", 00:24:58.254 "zoned": false 00:24:58.254 } 00:24:58.254 ] 00:24:58.254 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:24:58.254 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:58.255 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:24:58.524 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:24:58.524 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:24:58.524 17:24:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:58.783 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:24:58.783 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:24:59.350 [2024-12-06 17:24:41.351485] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5 00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:59.350 
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:24:59.350 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5
00:24:59.609 2024/12/06 17:24:41 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e976eab1-560a-41dc-b77f-a552fd0a08c5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device
00:24:59.609 request:
00:24:59.609 {
00:24:59.609 "method": "bdev_lvol_get_lvstores",
00:24:59.609 "params": {
00:24:59.609 "uuid": "e976eab1-560a-41dc-b77f-a552fd0a08c5"
00:24:59.609 }
00:24:59.609 }
00:24:59.609 Got JSON-RPC error response
00:24:59.609 GoRPCClient: error on JSON-RPC call
00:24:59.609 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:24:59.609 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:59.609 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:59.609 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:59.609 17:24:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:24:59.867 aio_bdev
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f4f93a6-98f4-4135-99eb-53d1477f639a
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5f4f93a6-98f4-4135-99eb-53d1477f639a
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:24:59.867 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:25:00.126 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f4f93a6-98f4-4135-99eb-53d1477f639a -t 2000
00:25:00.384 [
00:25:00.384 {
00:25:00.384 "aliases": [
00:25:00.384 "lvs/lvol"
00:25:00.384 ],
00:25:00.384 "assigned_rate_limits": {
00:25:00.384 "r_mbytes_per_sec": 0,
00:25:00.384 "rw_ios_per_sec": 0,
00:25:00.384 "rw_mbytes_per_sec": 0,
00:25:00.384 "w_mbytes_per_sec": 0
00:25:00.384 },
00:25:00.384 "block_size": 4096,
00:25:00.384 "claimed": false,
00:25:00.384 "driver_specific": {
00:25:00.384 "lvol": {
00:25:00.384 "base_bdev": "aio_bdev",
00:25:00.384 "clone": false,
00:25:00.384 "esnap_clone": false,
00:25:00.384 "lvol_store_uuid": "e976eab1-560a-41dc-b77f-a552fd0a08c5",
00:25:00.384 "num_allocated_clusters": 38,
00:25:00.384 "snapshot": false,
00:25:00.384 "thin_provision": false
00:25:00.384 }
00:25:00.384 },
00:25:00.384 "name": "5f4f93a6-98f4-4135-99eb-53d1477f639a",
00:25:00.384 "num_blocks": 38912,
00:25:00.384 "product_name": "Logical Volume",
00:25:00.384 "supported_io_types": {
00:25:00.384 "abort": false,
00:25:00.384 "compare": false,
00:25:00.384 "compare_and_write": false,
00:25:00.384 "copy": false,
00:25:00.384 "flush": false,
00:25:00.384 "get_zone_info": false,
00:25:00.384 "nvme_admin": false,
00:25:00.384 "nvme_io": false,
00:25:00.384 "nvme_io_md": false,
00:25:00.384 "nvme_iov_md": false,
00:25:00.384 "read": true,
00:25:00.384 "reset": true,
00:25:00.384 "seek_data": true,
00:25:00.384 "seek_hole": true,
00:25:00.384 "unmap": true,
00:25:00.384 "write": true,
00:25:00.384 "write_zeroes": true,
00:25:00.384 "zcopy": false,
00:25:00.384 "zone_append": false,
00:25:00.384 "zone_management": false
00:25:00.384 },
00:25:00.384 "uuid": "5f4f93a6-98f4-4135-99eb-53d1477f639a",
00:25:00.384 "zoned": false
00:25:00.384 }
00:25:00.384 ]
00:25:00.384 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:25:00.384 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:25:00.384 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5
00:25:00.952 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:25:00.952 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:25:00.952 17:24:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5
00:25:01.230 17:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
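Note: the two arithmetic checks above assert that the recovered lvstore kept the geometry written before the kill -9: 61 free clusters and the grown total of 99. As a standalone snippet (UUID and expected values taken from the log):

    lvs_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e976eab1-560a-41dc-b77f-a552fd0a08c5)
    free_clusters=$(jq -r '.[0].free_clusters' <<< "$lvs_json")
    data_clusters=$(jq -r '.[0].total_data_clusters' <<< "$lvs_json")
    (( free_clusters == 61 && data_clusters == 99 ))    # non-zero exit fails the test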
00:25:01.230 17:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5f4f93a6-98f4-4135-99eb-53d1477f639a
00:25:01.497 17:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e976eab1-560a-41dc-b77f-a552fd0a08c5
00:25:01.756 17:24:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:25:02.015 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
00:25:02.585
00:25:02.585 real 0m21.114s
00:25:02.585 user 0m29.237s
00:25:02.585 sys 0m7.694s
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:25:02.585 ************************************
00:25:02.585 END TEST lvs_grow_dirty
00:25:02.585 ************************************
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:25:02.585 nvmf_trace.0
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:25:02.585 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:02.846 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:25:02.846 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:02.846 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:25:02.846 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
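Note: the cleanup traced above, and continuing below with the kernel-module unload, reduces to four RPCs plus the trace-shm archive step; condensed, with names, UUIDs, and paths from the log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5f4f93a6-98f4-4135-99eb-53d1477f639a
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e976eab1-560a-41dc-b77f-a552fd0a08c5
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev    # file backing the AIO bdev
    tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0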
00:25:02.846 17:24:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:02.846 rmmod nvme_tcp
00:25:02.846 rmmod nvme_fabrics
00:25:02.846 rmmod nvme_keyring
00:25:02.846 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 102726 ']'
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 102726
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 102726 ']'
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 102726
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102726
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:03.113 killing process with pid 102726
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102726'
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 102726
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 102726
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 --
# ip link set nvmf_init_br2 nomaster 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:03.113 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:25:03.380 00:25:03.380 real 0m42.115s 00:25:03.380 user 0m48.581s 00:25:03.380 sys 0m10.689s 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:25:03.380 ************************************ 00:25:03.380 END TEST nvmf_lvs_grow 00:25:03.380 ************************************ 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:03.380 ************************************ 00:25:03.380 START TEST nvmf_bdev_io_wait 00:25:03.380 ************************************ 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:25:03.380 * Looking for test storage... 00:25:03.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:25:03.380 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:25:03.639 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.640 --rc genhtml_branch_coverage=1 00:25:03.640 --rc genhtml_function_coverage=1 00:25:03.640 --rc genhtml_legend=1 00:25:03.640 --rc geninfo_all_blocks=1 00:25:03.640 --rc geninfo_unexecuted_blocks=1 00:25:03.640 00:25:03.640 ' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.640 --rc genhtml_branch_coverage=1 00:25:03.640 --rc genhtml_function_coverage=1 00:25:03.640 --rc genhtml_legend=1 00:25:03.640 --rc geninfo_all_blocks=1 00:25:03.640 --rc geninfo_unexecuted_blocks=1 00:25:03.640 00:25:03.640 ' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.640 --rc genhtml_branch_coverage=1 00:25:03.640 --rc genhtml_function_coverage=1 00:25:03.640 --rc genhtml_legend=1 00:25:03.640 --rc geninfo_all_blocks=1 00:25:03.640 --rc geninfo_unexecuted_blocks=1 00:25:03.640 00:25:03.640 ' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:03.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.640 --rc genhtml_branch_coverage=1 00:25:03.640 --rc genhtml_function_coverage=1 00:25:03.640 --rc genhtml_legend=1 00:25:03.640 --rc geninfo_all_blocks=1 00:25:03.640 --rc 
geninfo_unexecuted_blocks=1 00:25:03.640 00:25:03.640 ' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:03.640 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:03.641 Cannot find device "nvmf_init_br" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:03.641 Cannot find device "nvmf_init_br2" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:03.641 Cannot find device "nvmf_tgt_br" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:03.641 Cannot find device "nvmf_tgt_br2" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:03.641 Cannot find device "nvmf_init_br" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:03.641 Cannot find device "nvmf_init_br2" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:25:03.641 Cannot find device "nvmf_tgt_br" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:03.641 Cannot find device "nvmf_tgt_br2" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:03.641 Cannot find device "nvmf_br" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:03.641 Cannot find device "nvmf_init_if" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:03.641 Cannot find device "nvmf_init_if2" 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:03.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:03.641 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:03.641 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:03.900 17:24:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:03.900 17:24:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:03.900 
17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:25:03.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:25:03.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.098 ms
00:25:03.900
00:25:03.900 --- 10.0.0.3 ping statistics ---
00:25:03.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:03.900 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:25:03.900 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:25:03.900 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms
00:25:03.900
00:25:03.900 --- 10.0.0.4 ping statistics ---
00:25:03.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:03.900 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:25:03.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:03.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms
00:25:03.900
00:25:03.900 --- 10.0.0.1 ping statistics ---
00:25:03.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:03.900 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:25:03.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:03.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms
00:25:03.900
00:25:03.900 --- 10.0.0.2 ping statistics ---
00:25:03.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:03.900 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:25:03.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=103189
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 103189
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 103189 ']'
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
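
Note: the trace above shows nvmfappstart launching nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocking in waitforlisten until the target's RPC socket answers. The helper's body is not captured in this log; a minimal sketch of the retry loop, assuming only the defaults visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do              # max_retries=100, as traced above
            kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
            [[ -S $rpc_addr ]] && return 0           # UNIX socket exists: target is listening
            sleep 0.1
        done
        return 1                                     # timed out waiting for the socket
    }

The real helper in autotest_common.sh is more thorough (it also probes the socket with an actual RPC before declaring the target ready); only the shape of the loop is sketched here.
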
00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.900 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.159 [2024-12-06 17:24:46.243153] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:04.159 [2024-12-06 17:24:46.244761] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:04.159 [2024-12-06 17:24:46.245649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.159 [2024-12-06 17:24:46.401134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:04.159 [2024-12-06 17:24:46.442953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.159 [2024-12-06 17:24:46.443025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.159 [2024-12-06 17:24:46.443040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.159 [2024-12-06 17:24:46.443050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.159 [2024-12-06 17:24:46.443059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.159 [2024-12-06 17:24:46.443990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.159 [2024-12-06 17:24:46.444299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:04.159 [2024-12-06 17:24:46.444971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:04.159 [2024-12-06 17:24:46.445032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.159 [2024-12-06 17:24:46.446224] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
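
A note on the core mask: the target was started with -m 0xF, and 0xF has bits 0-3 set, which is exactly why the four "Reactor started on core" notices above land on cores 0 through 3. A quick bash illustration of the decoding:

    mask=0xF                                   # the -m argument passed to nvmf_tgt above
    for ((core = 0; core < 8; core++)); do
        if (( (mask >> core) & 1 )); then      # one reactor per set bit
            echo "reactor expected on core $core"
        fi
    done
    # prints cores 0, 1, 2 and 3, matching the four reactor notices in the log

The bdevperf initiators started later in this test use masks 0x10, 0x20, 0x40 and 0x80 (cores 4-7), so target and initiators never share a core.
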
00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 [2024-12-06 17:24:46.607114] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:04.445 [2024-12-06 17:24:46.607511] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:04.445 [2024-12-06 17:24:46.608632] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:04.445 [2024-12-06 17:24:46.608860] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
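
In this trace, rpc_cmd forwards each command over the target's JSON-RPC socket. Driving the same provisioning by hand with scripts/rpc.py would look like the sketch below (paths and arguments taken verbatim from the surrounding entries, including the nvmf_create_transport and subsystem calls that follow); bdev_set_options must run before framework_start_init, which is why the target was started with --wait-for-rpc:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1             # tiny bdev_io pool: the point of the io_wait test
    $rpc framework_start_init                   # releases the --wait-for-rpc holding pattern
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB RAM disk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
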
00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 [2024-12-06 17:24:46.618702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 Malloc0 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:04.445 [2024-12-06 17:24:46.682832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=103230 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=103232 00:25:04.445 17:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=103234 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:04.445 { 00:25:04.445 "params": { 00:25:04.445 "name": "Nvme$subsystem", 00:25:04.445 "trtype": "$TEST_TRANSPORT", 00:25:04.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.445 "adrfam": "ipv4", 00:25:04.445 "trsvcid": "$NVMF_PORT", 00:25:04.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.445 "hdgst": ${hdgst:-false}, 00:25:04.445 "ddgst": ${ddgst:-false} 00:25:04.445 }, 00:25:04.445 "method": "bdev_nvme_attach_controller" 00:25:04.445 } 00:25:04.445 EOF 00:25:04.445 )") 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=103236 00:25:04.445 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:04.445 { 00:25:04.445 "params": { 00:25:04.445 "name": "Nvme$subsystem", 00:25:04.445 "trtype": "$TEST_TRANSPORT", 00:25:04.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.445 "adrfam": "ipv4", 00:25:04.445 "trsvcid": "$NVMF_PORT", 00:25:04.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.446 "hdgst": ${hdgst:-false}, 00:25:04.446 "ddgst": ${ddgst:-false} 00:25:04.446 }, 00:25:04.446 "method": "bdev_nvme_attach_controller" 00:25:04.446 } 00:25:04.446 EOF 00:25:04.446 )") 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:04.446 17:24:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:04.446 { 00:25:04.446 "params": { 00:25:04.446 "name": "Nvme$subsystem", 00:25:04.446 "trtype": "$TEST_TRANSPORT", 00:25:04.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.446 "adrfam": "ipv4", 00:25:04.446 "trsvcid": "$NVMF_PORT", 00:25:04.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.446 "hdgst": ${hdgst:-false}, 00:25:04.446 "ddgst": ${ddgst:-false} 00:25:04.446 }, 00:25:04.446 "method": "bdev_nvme_attach_controller" 00:25:04.446 } 00:25:04.446 EOF 00:25:04.446 )") 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:04.446 { 00:25:04.446 "params": { 00:25:04.446 "name": "Nvme$subsystem", 00:25:04.446 "trtype": "$TEST_TRANSPORT", 00:25:04.446 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:04.446 "adrfam": "ipv4", 00:25:04.446 "trsvcid": "$NVMF_PORT", 00:25:04.446 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:04.446 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:04.446 "hdgst": ${hdgst:-false}, 00:25:04.446 "ddgst": ${ddgst:-false} 00:25:04.446 }, 00:25:04.446 "method": "bdev_nvme_attach_controller" 00:25:04.446 } 00:25:04.446 EOF 00:25:04.446 )") 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
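
Each bdevperf instance receives its bdev configuration as --json /dev/fd/63: the script uses bash process substitution, so that file descriptor is simply the read end of gen_nvmf_target_json's output. The helper expands the heredoc template shown above ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT and the digest flags) into one bdev_nvme_attach_controller entry per subsystem. A condensed sketch with this run's values; note the real gen_nvmf_target_json in nvmf/common.sh joins such entries into the complete bdev-subsystem config document that bdevperf expects:

    # the expanded per-subsystem entry, pretty-printed the same way the helper does
    jq . <<< '{
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.3",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }'
    # consuming it: the shell exposes the substitution as /dev/fd/63, the path seen above
    bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1
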
00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:04.446 "params": { 00:25:04.446 "name": "Nvme1", 00:25:04.446 "trtype": "tcp", 00:25:04.446 "traddr": "10.0.0.3", 00:25:04.446 "adrfam": "ipv4", 00:25:04.446 "trsvcid": "4420", 00:25:04.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.446 "hdgst": false, 00:25:04.446 "ddgst": false 00:25:04.446 }, 00:25:04.446 "method": "bdev_nvme_attach_controller" 00:25:04.446 }' 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:04.446 "params": { 00:25:04.446 "name": "Nvme1", 00:25:04.446 "trtype": "tcp", 00:25:04.446 "traddr": "10.0.0.3", 00:25:04.446 "adrfam": "ipv4", 00:25:04.446 "trsvcid": "4420", 00:25:04.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.446 "hdgst": false, 00:25:04.446 "ddgst": false 00:25:04.446 }, 00:25:04.446 "method": "bdev_nvme_attach_controller" 00:25:04.446 }' 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:04.446 "params": { 00:25:04.446 "name": "Nvme1", 00:25:04.446 "trtype": "tcp", 00:25:04.446 "traddr": "10.0.0.3", 00:25:04.446 "adrfam": "ipv4", 00:25:04.446 "trsvcid": "4420", 00:25:04.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.446 "hdgst": false, 00:25:04.446 "ddgst": false 00:25:04.446 }, 00:25:04.446 "method": "bdev_nvme_attach_controller" 00:25:04.446 }' 00:25:04.446 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:25:04.704 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:25:04.704 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:04.704 "params": { 00:25:04.704 "name": "Nvme1", 00:25:04.704 "trtype": "tcp", 00:25:04.704 "traddr": "10.0.0.3", 00:25:04.704 "adrfam": "ipv4", 00:25:04.704 "trsvcid": "4420", 00:25:04.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:04.704 "hdgst": false, 00:25:04.704 "ddgst": false 00:25:04.704 }, 00:25:04.704 "method": "bdev_nvme_attach_controller" 00:25:04.704 }' 00:25:04.705 [2024-12-06 17:24:46.746550] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
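
Four bdevperf instances run in parallel here, one per I/O type, all against the same cnode1 subsystem. Distinct core masks (0x10 through 0x80) and shared-memory ids (-i 1 through 4) keep the DPDK processes from colliding, and the script reaps them with the wait calls traced below. A sketch of the orchestration pattern, reusing gen_nvmf_target_json from above (the opts array is shorthand for this sketch only):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    opts=(-q 128 -o 4096 -t 1 -s 256)   # queue depth, IO size, run time, DPDK memory (MB)
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${opts[@]}" -w write & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${opts[@]}" -w read  & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${opts[@]}" -w flush & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${opts[@]}" -w unmap & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"   # 103230/103232/103234/103236 in this run
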
00:25:04.705 [2024-12-06 17:24:46.747366] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:25:04.705 [2024-12-06 17:24:46.751073] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:04.705 [2024-12-06 17:24:46.751164] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:25:04.705 17:24:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 103230 00:25:04.705 [2024-12-06 17:24:46.789830] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:04.705 [2024-12-06 17:24:46.789944] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:25:04.705 [2024-12-06 17:24:46.791601] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:04.705 [2024-12-06 17:24:46.791686] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:25:04.705 [2024-12-06 17:24:46.941652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.705 [2024-12-06 17:24:46.972601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:04.705 [2024-12-06 17:24:46.974115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.705 [2024-12-06 17:24:47.000257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:04.963 [2024-12-06 17:24:47.024613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.963 [2024-12-06 17:24:47.058290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:04.963 [2024-12-06 17:24:47.067699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.963 Running I/O for 1 seconds... 00:25:04.963 [2024-12-06 17:24:47.105719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:04.963 Running I/O for 1 seconds... 00:25:04.963 Running I/O for 1 seconds... 00:25:04.963 Running I/O for 1 seconds... 
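
The per-job throughput lines that follow are just IOPS multiplied by the 4096-byte I/O size. For the read job, for example:

    # throughput = IOPS x IO size; 8307 x 4096 B per second
    awk 'BEGIN { printf "%.2f MiB/s\n", 8307 * 4096 / (1024 * 1024) }'   # -> 32.45 MiB/s

The flush job's far higher rate (~146k IOPS) is expected: flush against a RAM-backed malloc bdev moves no data, so it is effectively a no-op.
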
00:25:05.899 6110.00 IOPS, 23.87 MiB/s [2024-12-06T17:24:48.197Z] 8307.00 IOPS, 32.45 MiB/s
00:25:05.899 Latency(us)
00:25:05.899 [2024-12-06T17:24:48.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.899 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:25:05.899 Nvme1n1 : 1.03 6076.65 23.74 0.00 0.00 20800.51 5064.15 46470.98
00:25:05.899 [2024-12-06T17:24:48.197Z] ===================================================================================================================
00:25:05.899 [2024-12-06T17:24:48.197Z] Total : 6076.65 23.74 0.00 0.00 20800.51 5064.15 46470.98
00:25:05.899
00:25:05.899 Latency(us)
00:25:05.899 [2024-12-06T17:24:48.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.899 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:25:05.899 Nvme1n1 : 1.01 8342.31 32.59 0.00 0.00 15255.01 4855.62 20256.58
00:25:05.899 [2024-12-06T17:24:48.197Z] ===================================================================================================================
00:25:05.899 [2024-12-06T17:24:48.197Z] Total : 8342.31 32.59 0.00 0.00 15255.01 4855.62 20256.58
00:25:05.900 146856.00 IOPS, 573.66 MiB/s
00:25:05.900 Latency(us)
00:25:05.900 [2024-12-06T17:24:48.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.900 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:25:05.900 Nvme1n1 : 1.00 146487.53 572.22 0.00 0.00 868.52 407.74 2606.55
00:25:05.900 [2024-12-06T17:24:48.198Z] ===================================================================================================================
00:25:05.900 [2024-12-06T17:24:48.198Z] Total : 146487.53 572.22 0.00 0.00 868.52 407.74 2606.55
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 103232 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 103234
00:25:06.160 6144.00 IOPS, 24.00 MiB/s
00:25:06.160 Latency(us) [2024-12-06T17:24:48.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:06.160 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:25:06.160 Nvme1n1 : 1.02 6294.33 24.59 0.00 0.00 20273.51 3902.37 51713.86
00:25:06.160 [2024-12-06T17:24:48.458Z] ===================================================================================================================
00:25:06.160 [2024-12-06T17:24:48.458Z] Total : 6294.33 24.59 0.00 0.00 20273.51 3902.37 51713.86
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 103236
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:25:06.160 17:24:48
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.160 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.160 rmmod nvme_tcp 00:25:06.160 rmmod nvme_fabrics 00:25:06.160 rmmod nvme_keyring 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 103189 ']' 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 103189 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 103189 ']' 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 103189 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103189 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:06.420 killing process with pid 103189 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103189' 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 103189 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 103189 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:25:06.420 17:24:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:06.420 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:25:06.680 00:25:06.680 real 0m3.352s 00:25:06.680 user 0m11.887s 00:25:06.680 sys 0m2.189s 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 
************************************ 00:25:06.680 END TEST nvmf_bdev_io_wait 00:25:06.680 ************************************ 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:06.680 ************************************ 00:25:06.680 START TEST nvmf_queue_depth 00:25:06.680 ************************************ 00:25:06.680 17:24:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:25:06.942 * Looking for test storage... 00:25:06.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > 
ver2_l ? ver1_l : ver2_l) )) 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.942 --rc genhtml_branch_coverage=1 00:25:06.942 --rc genhtml_function_coverage=1 00:25:06.942 --rc genhtml_legend=1 00:25:06.942 --rc geninfo_all_blocks=1 00:25:06.942 --rc geninfo_unexecuted_blocks=1 00:25:06.942 00:25:06.942 ' 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.942 --rc genhtml_branch_coverage=1 00:25:06.942 --rc genhtml_function_coverage=1 00:25:06.942 --rc genhtml_legend=1 00:25:06.942 --rc geninfo_all_blocks=1 00:25:06.942 --rc geninfo_unexecuted_blocks=1 00:25:06.942 00:25:06.942 ' 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.942 --rc genhtml_branch_coverage=1 00:25:06.942 --rc genhtml_function_coverage=1 00:25:06.942 --rc genhtml_legend=1 00:25:06.942 --rc geninfo_all_blocks=1 00:25:06.942 --rc geninfo_unexecuted_blocks=1 00:25:06.942 00:25:06.942 ' 00:25:06.942 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:06.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:06.942 --rc genhtml_branch_coverage=1 00:25:06.942 --rc genhtml_function_coverage=1 00:25:06.942 --rc genhtml_legend=1 00:25:06.942 --rc geninfo_all_blocks=1 00:25:06.942 --rc 
geninfo_unexecuted_blocks=1 00:25:06.942 00:25:06.942 ' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:06.943 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:06.944 Cannot find device "nvmf_init_br" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:06.944 Cannot find device "nvmf_init_br2" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:06.944 Cannot find device "nvmf_tgt_br" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:06.944 Cannot find device "nvmf_tgt_br2" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:06.944 Cannot find device "nvmf_init_br" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:06.944 Cannot find device "nvmf_init_br2" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:25:06.944 
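[Editor's note] The trace pairs every failing teardown command above with a bare `true`: nvmf_veth_init tears down leftovers from a previous run before building anything, and guards each step so "Cannot find device" on a clean host cannot abort the script under `set -e`. A minimal sketch of the pattern (interface names as in the trace; the real code lives around nvmf/common.sh@162-174):

    # Best-effort teardown of a previous run's topology; each step may
    # legitimately fail, so every command is guarded with || true.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true   # "Cannot find device" is expected on a clean host
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true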
17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:06.944 Cannot find device "nvmf_tgt_br" 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:25:06.944 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:07.204 Cannot find device "nvmf_tgt_br2" 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:07.204 Cannot find device "nvmf_br" 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:07.204 Cannot find device "nvmf_init_if" 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:07.204 Cannot find device "nvmf_init_if2" 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:07.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:07.204 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:07.204 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:07.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:07.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:25:07.465 00:25:07.465 --- 10.0.0.3 ping statistics --- 00:25:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.465 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:07.465 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:07.465 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:25:07.465 00:25:07.465 --- 10.0.0.4 ping statistics --- 00:25:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.465 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:07.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:07.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:07.465 00:25:07.465 --- 10.0.0.1 ping statistics --- 00:25:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.465 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:07.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:07.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:25:07.465 00:25:07.465 --- 10.0.0.2 ping statistics --- 00:25:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:07.465 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103490 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103490 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103490 ']' 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
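[Editor's note] The `waitforlisten 103490` entry above blocks until the just-launched target is both alive and reachable: it polls the pid and the RPC Unix socket (default /var/tmp/spdk.sock, with the `max_retries=100` visible in the trace). A simplified sketch of what the helper in autotest_common.sh does; the real helper also verifies the socket actually accepts RPCs:

    # Poll until $pid is alive and its RPC socket appears (simplified sketch).
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died during startup
            [[ -S $rpc_addr ]] && return 0           # socket exists; target is listening
            sleep 0.1
        done
        return 1
    }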
00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.465 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.465 [2024-12-06 17:24:49.660249] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:07.465 [2024-12-06 17:24:49.661432] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:07.465 [2024-12-06 17:24:49.661501] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:07.725 [2024-12-06 17:24:49.815693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.725 [2024-12-06 17:24:49.853063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:07.725 [2024-12-06 17:24:49.853123] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:07.725 [2024-12-06 17:24:49.853136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:07.725 [2024-12-06 17:24:49.853146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:07.725 [2024-12-06 17:24:49.853155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:07.725 [2024-12-06 17:24:49.853552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.725 [2024-12-06 17:24:49.909385] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:07.725 [2024-12-06 17:24:49.909726] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
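[Editor's note] The startup notices above come from nvmf_tgt running inside the target namespace. nvmfappstart composes that command from pieces traced earlier: the base flags appended at nvmf/common.sh@29 and @34, and the netns prefix prepended at @227. Put together (an approximation; variable names and values as in the trace):

    NVMF_APP_SHM_ID=0
    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)            # common.sh@29: shm id + all tracepoint groups
    NVMF_APP+=(--interrupt-mode)                           # common.sh@34: this suite runs interrupt-driven
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") # common.sh@227: run inside the namespace

    "${NVMF_APP[@]}" -m 0x2 &    # nvmfappstart -m 0x2, as traced at common.sh@508
    nvmfpid=$!
    waitforlisten "$nvmfpid"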
00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.725 [2024-12-06 17:24:49.990535] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.725 17:24:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 Malloc0 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
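[Editor's note] The rpc_cmd entries in this stretch provision the target over its RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks (the MALLOC_* values set at the top of queue_depth.sh), a subsystem, its namespace, and, just below, a listener on 10.0.0.3:4420. Condensed into direct rpc.py calls, the traced sequence amounts to roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # options exactly as in NVMF_TRANSPORT_OPTS above
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420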
00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 [2024-12-06 17:24:50.046447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103525 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103525 /var/tmp/bdevperf.sock 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103525 ']' 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.985 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:07.985 [2024-12-06 17:24:50.125468] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
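[Editor's note] With the target listening, the initiator side starts next: bdevperf is launched idle (-z) on its own RPC socket, the remote namespace is attached as a local bdev, and bdevperf.py triggers the actual run. Reconstructed from the commands traced at queue_depth.sh@29-35:

    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    # -q 1024: queue depth under test; -o 4096: 4 KiB IOs; -w verify: read-back verification; -t 10: run seconds
    $spdk/build/examples/bdevperf -z -r $sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" "$sock"
    $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests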
00:25:07.985 [2024-12-06 17:24:50.125608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103525 ] 00:25:08.244 [2024-12-06 17:24:50.288594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.244 [2024-12-06 17:24:50.328364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:08.244 NVMe0n1 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.244 17:24:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:08.502 Running I/O for 10 seconds... 00:25:10.373 7305.00 IOPS, 28.54 MiB/s [2024-12-06T17:24:54.045Z] 7780.50 IOPS, 30.39 MiB/s [2024-12-06T17:24:54.612Z] 7977.33 IOPS, 31.16 MiB/s [2024-12-06T17:24:55.986Z] 8029.50 IOPS, 31.37 MiB/s [2024-12-06T17:24:56.922Z] 8141.60 IOPS, 31.80 MiB/s [2024-12-06T17:24:57.860Z] 8197.17 IOPS, 32.02 MiB/s [2024-12-06T17:24:58.798Z] 8199.29 IOPS, 32.03 MiB/s [2024-12-06T17:24:59.802Z] 8205.88 IOPS, 32.05 MiB/s [2024-12-06T17:25:00.738Z] 8236.44 IOPS, 32.17 MiB/s [2024-12-06T17:25:00.738Z] 8275.50 IOPS, 32.33 MiB/s 00:25:18.440 Latency(us) 00:25:18.440 [2024-12-06T17:25:00.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.440 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:25:18.440 Verification LBA range: start 0x0 length 0x4000 00:25:18.440 NVMe0n1 : 10.09 8299.73 32.42 0.00 0.00 122729.12 34317.03 112006.98 00:25:18.440 [2024-12-06T17:25:00.738Z] =================================================================================================================== 00:25:18.440 [2024-12-06T17:25:00.738Z] Total : 8299.73 32.42 0.00 0.00 122729.12 34317.03 112006.98 00:25:18.440 { 00:25:18.440 "results": [ 00:25:18.440 { 00:25:18.440 "job": "NVMe0n1", 00:25:18.440 "core_mask": "0x1", 00:25:18.440 "workload": "verify", 00:25:18.440 "status": "finished", 00:25:18.440 "verify_range": { 00:25:18.440 "start": 0, 00:25:18.440 "length": 16384 00:25:18.440 }, 00:25:18.440 "queue_depth": 1024, 00:25:18.440 "io_size": 4096, 00:25:18.440 "runtime": 10.094178, 00:25:18.440 "iops": 8299.734757996144, 00:25:18.440 "mibps": 32.42083889842244, 00:25:18.440 "io_failed": 0, 00:25:18.440 "io_timeout": 0, 00:25:18.440 "avg_latency_us": 122729.11883589833, 00:25:18.440 "min_latency_us": 34317.03272727273, 00:25:18.440 "max_latency_us": 112006.98181818181 00:25:18.440 } 00:25:18.440 ], 
00:25:18.440 "core_count": 1 00:25:18.440 } 00:25:18.440 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103525 00:25:18.440 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103525 ']' 00:25:18.440 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103525 00:25:18.440 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:18.440 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.440 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103525 00:25:18.699 killing process with pid 103525 00:25:18.699 Received shutdown signal, test time was about 10.000000 seconds 00:25:18.699 00:25:18.699 Latency(us) 00:25:18.699 [2024-12-06T17:25:00.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.699 [2024-12-06T17:25:00.997Z] =================================================================================================================== 00:25:18.699 [2024-12-06T17:25:00.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103525' 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103525 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103525 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.699 rmmod nvme_tcp 00:25:18.699 rmmod nvme_fabrics 00:25:18.699 rmmod nvme_keyring 00:25:18.699 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.958 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:25:18.958 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:25:18.958 17:25:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103490 ']' 00:25:18.958 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103490 00:25:18.959 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103490 ']' 00:25:18.959 17:25:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103490 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103490 00:25:18.959 killing process with pid 103490 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103490' 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103490 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103490 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:18.959 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:19.217 17:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:25:19.217 00:25:19.217 real 0m12.494s 00:25:19.217 user 0m20.801s 00:25:19.217 sys 0m2.287s 00:25:19.217 ************************************ 00:25:19.217 END TEST nvmf_queue_depth 00:25:19.217 ************************************ 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:19.217 ************************************ 00:25:19.217 START TEST nvmf_target_multipath 00:25:19.217 ************************************ 00:25:19.217 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:25:19.476 * Looking for test storage... 
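[Editor's note] Before the multipath test below repeats the same bring-up, note how cleanly the firewall state was unwound above: every rule the suite inserts is tagged with an SPDK_NVMF comment (the ipts wrapper traced at common.sh@790), so the iptr step can drop them all in one sweep instead of deleting rules one by one. The pattern, approximately as implemented in nvmf/common.sh:

    # Tag every inserted rule with a recognizable comment...
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    # ...so teardown can remove all tagged rules in a single pass.
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # as at common.sh@217
    iptr                                                            # as in the teardown at common.sh@297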
00:25:19.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.476 --rc genhtml_branch_coverage=1 00:25:19.476 --rc genhtml_function_coverage=1 00:25:19.476 --rc genhtml_legend=1 00:25:19.476 --rc geninfo_all_blocks=1 00:25:19.476 --rc geninfo_unexecuted_blocks=1 00:25:19.476 00:25:19.476 ' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.476 --rc genhtml_branch_coverage=1 00:25:19.476 --rc genhtml_function_coverage=1 00:25:19.476 --rc genhtml_legend=1 00:25:19.476 --rc geninfo_all_blocks=1 00:25:19.476 --rc geninfo_unexecuted_blocks=1 00:25:19.476 00:25:19.476 ' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.476 --rc genhtml_branch_coverage=1 00:25:19.476 --rc genhtml_function_coverage=1 00:25:19.476 --rc genhtml_legend=1 00:25:19.476 --rc geninfo_all_blocks=1 00:25:19.476 --rc geninfo_unexecuted_blocks=1 00:25:19.476 00:25:19.476 ' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:19.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.476 --rc genhtml_branch_coverage=1 00:25:19.476 --rc genhtml_function_coverage=1 00:25:19.476 --rc 
genhtml_legend=1 00:25:19.476 --rc geninfo_all_blocks=1 00:25:19.476 --rc geninfo_unexecuted_blocks=1 00:25:19.476 00:25:19.476 ' 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.476 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.477 17:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:19.477 17:25:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:19.477 Cannot find device "nvmf_init_br" 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:19.477 Cannot find device "nvmf_init_br2" 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:19.477 Cannot find device "nvmf_tgt_br" 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.477 Cannot find device "nvmf_tgt_br2" 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:25:19.477 Cannot find device "nvmf_init_br" 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:19.477 Cannot find device "nvmf_init_br2" 00:25:19.477 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:25:19.478 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:19.735 Cannot find device "nvmf_tgt_br" 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:19.735 Cannot find device "nvmf_tgt_br2" 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:19.735 Cannot find device "nvmf_br" 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:19.735 Cannot find device "nvmf_init_if" 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:19.735 Cannot find device "nvmf_init_if2" 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.735 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.735 17:25:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:19.735 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:19.735 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:25:19.735 00:25:19.735 --- 10.0.0.3 ping statistics --- 00:25:19.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.735 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:19.735 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:19.735 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:25:19.735 00:25:19.735 --- 10.0.0.4 ping statistics --- 00:25:19.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.735 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:25:19.735 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:25:19.735 00:25:19.735 --- 10.0.0.1 ping statistics --- 00:25:19.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.735 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:19.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:19.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:25:19.993 00:25:19.993 --- 10.0.0.2 ping statistics --- 00:25:19.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.993 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=103887 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 103887 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 103887 ']' 00:25:19.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
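
Stripped of trace prefixes, the nvmf_veth_init sequence above builds a small, reproducible topology: a network namespace for the target, veth pairs whose bridge-side ends are enslaved to one bridge, /24 addresses on every endpoint, iptables ACCEPT rules for the NVMe/TCP port, and cross-namespace pings as a smoke test. A minimal sketch, using the names and addresses from the trace and omitting the second initiator/target pair, cleanup, and error handling:

# One initiator pair and one target pair; the trace above creates two of each.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge                           # bridge joins the *_br peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                        # host reaches the namespaced target
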
00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.993 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:19.993 [2024-12-06 17:25:02.154129] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:19.993 [2024-12-06 17:25:02.155911] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:19.993 [2024-12-06 17:25:02.156004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.252 [2024-12-06 17:25:02.318631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:20.252 [2024-12-06 17:25:02.359309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.252 [2024-12-06 17:25:02.359374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.252 [2024-12-06 17:25:02.359390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.252 [2024-12-06 17:25:02.359401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.252 [2024-12-06 17:25:02.359410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.252 [2024-12-06 17:25:02.360248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.252 [2024-12-06 17:25:02.360443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.252 [2024-12-06 17:25:02.360376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.252 [2024-12-06 17:25:02.360439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.252 [2024-12-06 17:25:02.414936] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:20.252 [2024-12-06 17:25:02.415094] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:20.252 [2024-12-06 17:25:02.415302] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:20.252 [2024-12-06 17:25:02.415573] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:25:20.252 [2024-12-06 17:25:02.416241] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
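
waitforlisten, traced on both sides of the target's startup notices here, is a bounded poll of the RPC UNIX-domain socket rather than a fixed sleep, so the suite proceeds as soon as nvmf_tgt is serving RPCs. A sketch of the idiom with the socket path and retry cap from the trace; probing via rpc_get_methods and the 0.5s cadence are illustrative assumptions, not a verbatim copy of autotest_common.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
max_retries=100                                           # value shown in the trace
for ((i = 0; i < max_retries; i++)); do
    # Succeeds only once nvmf_tgt is up and answering on /var/tmp/spdk.sock.
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.5
done
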
00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.252 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:20.511 [2024-12-06 17:25:02.785319] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.769 17:25:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:21.027 Malloc0 00:25:21.027 17:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:25:21.286 17:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.544 17:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:21.801 [2024-12-06 17:25:03.953299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:21.801 17:25:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:25:22.078 [2024-12-06 17:25:04.229213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:25:22.078 17:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:25:22.078 17:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:25:22.336 17:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:25:22.336 17:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:25:22.336 17:25:04 
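
The provisioning RPCs above amount to one malloc bdev behind one subsystem, exported through two TCP listeners, with the host connecting to each portal so the kernel binds both paths into a single multipathed namespace (hence nvme0c0n1 and nvme0c1n1 below). A condensed replay using the exact RPCs and addresses from the trace; the --hostnqn/--hostid arguments are elided here:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G
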
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:22.336 17:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:22.336 17:25:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=104007 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:24.239 17:25:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:25:24.239 [global] 00:25:24.239 thread=1 00:25:24.239 invalidate=1 00:25:24.239 rw=randrw 00:25:24.239 time_based=1 00:25:24.239 runtime=6 00:25:24.239 ioengine=libaio 00:25:24.239 direct=1 00:25:24.239 bs=4096 00:25:24.239 iodepth=128 00:25:24.239 norandommap=0 00:25:24.239 numjobs=1 00:25:24.239 00:25:24.239 verify_dump=1 00:25:24.239 verify_backlog=512 00:25:24.239 verify_state_save=0 00:25:24.239 do_verify=1 00:25:24.239 verify=crc32c-intel 00:25:24.497 [job0] 00:25:24.497 filename=/dev/nvme0n1 00:25:24.497 Could not set queue depth (nvme0n1) 00:25:24.497 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:24.497 fio-3.35 00:25:24.497 Starting 1 thread 00:25:25.432 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:25.691 17:25:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:25.950 17:25:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:26.888 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:26.888 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:26.888 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:26.888 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:27.146 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:27.405 17:25:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:28.342 17:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:28.342 17:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:28.342 17:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:28.342 17:25:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 104007 00:25:30.873 00:25:30.873 job0: (groupid=0, jobs=1): err= 0: pid=104028: Fri Dec 6 17:25:12 2024 00:25:30.873 read: IOPS=10.5k, BW=41.0MiB/s (43.0MB/s)(246MiB/5999msec) 00:25:30.873 slat (usec): min=2, max=7095, avg=54.99, stdev=273.76 00:25:30.873 clat (usec): min=701, max=15767, avg=8154.31, stdev=1438.14 00:25:30.873 lat (usec): min=752, max=15828, avg=8209.30, stdev=1454.26 00:25:30.873 clat percentiles (usec): 00:25:30.873 | 1.00th=[ 4686], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7177], 00:25:30.873 | 30.00th=[ 7439], 40.00th=[ 7701], 50.00th=[ 7963], 60.00th=[ 8291], 00:25:30.873 | 70.00th=[ 8586], 80.00th=[ 9110], 90.00th=[10028], 95.00th=[10683], 00:25:30.873 | 99.00th=[12518], 99.50th=[13173], 99.90th=[14484], 99.95th=[14746], 00:25:30.873 | 99.99th=[15270] 00:25:30.873 bw ( KiB/s): min=10840, max=28000, per=52.66%, avg=22100.09, stdev=6430.83, samples=11 00:25:30.873 iops : min= 2710, max= 7000, avg=5525.00, stdev=1607.69, samples=11 00:25:30.873 write: IOPS=6275, BW=24.5MiB/s (25.7MB/s)(130MiB/5305msec); 0 zone resets 00:25:30.873 slat (usec): min=13, max=2657, avg=65.67, stdev=150.56 00:25:30.873 clat (usec): min=666, max=15244, avg=7437.10, stdev=1167.34 00:25:30.873 lat (usec): min=744, max=15281, avg=7502.76, stdev=1174.12 00:25:30.873 clat percentiles (usec): 00:25:30.873 | 1.00th=[ 3982], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 6783], 00:25:30.873 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:25:30.873 | 70.00th=[ 7767], 80.00th=[ 8160], 90.00th=[ 8848], 95.00th=[ 9372], 00:25:30.873 | 99.00th=[10683], 99.50th=[11731], 99.90th=[13435], 99.95th=[13960], 00:25:30.873 | 99.99th=[14484] 00:25:30.873 bw ( KiB/s): min=11504, max=27713, per=88.20%, avg=22139.82, stdev=6128.00, samples=11 00:25:30.873 iops : min= 2876, max= 6928, avg=5534.91, stdev=1531.96, samples=11 00:25:30.873 lat (usec) : 750=0.01%, 1000=0.01% 00:25:30.873 lat (msec) : 2=0.05%, 4=0.58%, 10=92.44%, 20=6.93% 00:25:30.873 cpu : usr=5.70%, sys=22.87%, ctx=7407, majf=0, minf=102 00:25:30.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:25:30.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:30.874 issued rwts: total=62943,33289,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:30.874 00:25:30.874 Run status group 0 (all jobs): 00:25:30.874 READ: bw=41.0MiB/s (43.0MB/s), 41.0MiB/s-41.0MiB/s (43.0MB/s-43.0MB/s), io=246MiB (258MB), run=5999-5999msec 00:25:30.874 WRITE: bw=24.5MiB/s (25.7MB/s), 24.5MiB/s-24.5MiB/s (25.7MB/s-25.7MB/s), io=130MiB (136MB), run=5305-5305msec 00:25:30.874 00:25:30.874 Disk stats (read/write): 00:25:30.874 nvme0n1: ios=61959/32768, merge=0/0, ticks=473406/232661, in_queue=706067, util=98.65% 00:25:30.874 17:25:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:25:30.874 17:25:13 
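
The check_ana_state calls repeated through this run all follow one pattern: after nvmf_subsystem_listener_set_ana_state flips a listener, poll the kernel's sysfs view of that path until it agrees, giving up after a bounded number of 1s sleeps. A paraphrase of the helper as it appears in the trace (target/multipath.sh is the authoritative version):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1      # ~20s budget, as in the trace
        sleep 1s
    done
}
check_ana_state nvme0c0n1 non-optimized       # e.g. after demoting the 10.0.0.3 listener
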
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:25:31.440 17:25:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=104165 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:25:32.378 17:25:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:25:32.378 [global] 00:25:32.378 thread=1 00:25:32.378 invalidate=1 00:25:32.378 rw=randrw 00:25:32.378 time_based=1 00:25:32.378 runtime=6 00:25:32.378 ioengine=libaio 00:25:32.378 direct=1 00:25:32.378 bs=4096 00:25:32.378 iodepth=128 00:25:32.378 norandommap=0 00:25:32.378 numjobs=1 00:25:32.378 00:25:32.378 verify_dump=1 00:25:32.378 verify_backlog=512 00:25:32.378 verify_state_save=0 00:25:32.378 do_verify=1 00:25:32.378 verify=crc32c-intel 00:25:32.378 [job0] 00:25:32.378 filename=/dev/nvme0n1 00:25:32.378 Could not set queue depth (nvme0n1) 00:25:32.378 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:25:32.378 fio-3.35 00:25:32.378 Starting 1 thread 00:25:33.315 17:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:33.572 17:25:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:33.830 17:25:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:35.206 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:35.206 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:35.206 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:35.206 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:35.206 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:35.465 17:25:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:25:36.852 17:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:25:36.853 17:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:25:36.853 17:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:25:36.853 17:25:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 104165 00:25:38.754 00:25:38.754 job0: (groupid=0, jobs=1): err= 0: pid=104186: Fri Dec 6 17:25:20 2024 00:25:38.754 read: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(281MiB/6005msec) 00:25:38.754 slat (usec): min=4, max=6087, avg=41.60, stdev=227.97 00:25:38.754 clat (usec): min=252, max=51053, avg=7218.40, stdev=2370.18 00:25:38.754 lat (usec): min=293, max=51063, avg=7260.00, stdev=2384.12 00:25:38.754 clat percentiles (usec): 00:25:38.754 | 1.00th=[ 3294], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5669], 00:25:38.754 | 30.00th=[ 6390], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7635], 00:25:38.754 | 70.00th=[ 7963], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[ 9765], 00:25:38.754 | 99.00th=[11469], 99.50th=[12256], 99.90th=[47973], 99.95th=[50594], 00:25:38.754 | 99.99th=[51119] 00:25:38.754 bw ( KiB/s): min= 4304, max=42096, per=53.20%, avg=25485.73, stdev=10630.28, samples=11 00:25:38.754 iops : min= 1076, max=10524, avg=6371.36, stdev=2657.65, samples=11 00:25:38.754 write: IOPS=7498, BW=29.3MiB/s (30.7MB/s)(151MiB/5157msec); 0 zone resets 00:25:38.754 slat (usec): min=10, max=3729, avg=52.97, stdev=119.81 00:25:38.754 clat (usec): min=669, max=14235, avg=6276.55, stdev=1587.36 00:25:38.754 lat (usec): min=696, max=14259, avg=6329.52, stdev=1600.02 00:25:38.754 clat percentiles (usec): 00:25:38.755 | 1.00th=[ 2835], 5.00th=[ 3589], 10.00th=[ 4015], 20.00th=[ 4621], 00:25:38.755 | 30.00th=[ 5211], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7111], 00:25:38.755 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7898], 95.00th=[ 8160], 00:25:38.755 | 99.00th=[ 9765], 99.50th=[10683], 99.90th=[12125], 99.95th=[12649], 00:25:38.755 | 99.99th=[13435] 00:25:38.755 bw ( KiB/s): min= 4736, 
max=41584, per=85.03%, avg=25504.36, stdev=10368.37, samples=11 00:25:38.755 iops : min= 1184, max=10396, avg=6376.00, stdev=2592.17, samples=11 00:25:38.755 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:25:38.755 lat (msec) : 2=0.21%, 4=4.97%, 10=91.90%, 20=2.77%, 50=0.06% 00:25:38.755 lat (msec) : 100=0.05% 00:25:38.755 cpu : usr=6.76%, sys=25.05%, ctx=9546, majf=0, minf=127 00:25:38.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:38.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:38.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:38.755 issued rwts: total=71921,38669,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:38.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:38.755 00:25:38.755 Run status group 0 (all jobs): 00:25:38.755 READ: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=281MiB (295MB), run=6005-6005msec 00:25:38.755 WRITE: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=151MiB (158MB), run=5157-5157msec 00:25:38.755 00:25:38.755 Disk stats (read/write): 00:25:38.755 nvme0n1: ios=70876/38114, merge=0/0, ticks=468143/220719, in_queue=688862, util=98.63% 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:38.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:25:38.755 17:25:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 
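
The firewall teardown traced just below relies on the tagging visible in the setup: every rule the test inserted carried an '-m comment' tag containing SPDK_NVMF, so iptr can drop exactly those rules, and nothing else, by filtering a full ruleset dump and restoring the remainder. The whole idiom is one pipeline, matching the iptables-save / grep -v SPDK_NVMF / iptables-restore calls in the trace:

# Remove only the SPDK-tagged rules; unrelated firewall state survives.
iptables-save | grep -v SPDK_NVMF | iptables-restore
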
00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.015 rmmod nvme_tcp 00:25:39.015 rmmod nvme_fabrics 00:25:39.015 rmmod nvme_keyring 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 103887 ']' 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 103887 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 103887 ']' 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 103887 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103887 00:25:39.015 killing process with pid 103887 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103887' 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 103887 00:25:39.015 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 103887 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.275 17:25:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:39.275 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:25:39.534 00:25:39.534 real 0m20.188s 00:25:39.534 user 1m10.565s 00:25:39.534 sys 0m9.298s 00:25:39.534 ************************************ 00:25:39.534 END TEST nvmf_target_multipath 00:25:39.534 ************************************ 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:39.534 17:25:21 
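
The 'real 0m20.188s' summary above and the START TEST banner below both come from the run_test harness: each suite runs under 'time' between banners, so per-test wall-clock cost can be scraped straight from the log. A simplified sketch of that wrapper; the real helper in autotest_common.sh also saves and restores xtrace state, which this omits:

run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                 # produces the real/user/sys block
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}
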
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:39.534 ************************************ 00:25:39.534 START TEST nvmf_zcopy 00:25:39.534 ************************************ 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:25:39.534 * Looking for test storage... 00:25:39.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:25:39.534 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.795 --rc genhtml_branch_coverage=1 00:25:39.795 --rc genhtml_function_coverage=1 00:25:39.795 --rc genhtml_legend=1 00:25:39.795 --rc geninfo_all_blocks=1 00:25:39.795 --rc geninfo_unexecuted_blocks=1 00:25:39.795 00:25:39.795 ' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.795 --rc genhtml_branch_coverage=1 00:25:39.795 --rc genhtml_function_coverage=1 00:25:39.795 --rc genhtml_legend=1 00:25:39.795 --rc geninfo_all_blocks=1 00:25:39.795 --rc geninfo_unexecuted_blocks=1 00:25:39.795 00:25:39.795 ' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.795 --rc genhtml_branch_coverage=1 00:25:39.795 --rc genhtml_function_coverage=1 00:25:39.795 --rc genhtml_legend=1 00:25:39.795 --rc geninfo_all_blocks=1 00:25:39.795 --rc geninfo_unexecuted_blocks=1 00:25:39.795 00:25:39.795 ' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:39.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.795 --rc genhtml_branch_coverage=1 00:25:39.795 --rc genhtml_function_coverage=1 00:25:39.795 --rc genhtml_legend=1 00:25:39.795 --rc geninfo_all_blocks=1 00:25:39.795 --rc geninfo_unexecuted_blocks=1 00:25:39.795 00:25:39.795 ' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
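The cmp_versions walk traced above splits each version string on '.', '-' and ':' and compares component-wise; lt 1.15 2 succeeds because 1 < 2 in the first component, so the harness concludes the installed lcov predates 2.x and exports the matching legacy --rc coverage flags. A standalone re-implementation of the same comparison (function and variable names are illustrative, not the scripts/common.sh originals):

    lt() {   # succeed (return 0) when version $1 sorts strictly before $2
        local -a v1 v2
        local i n
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
            if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov < 2: use the legacy --rc option spelling"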
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.795 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.796 17:25:21 
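build_nvmf_app_args, traced above and continuing just below, assembles the target's argv piece by piece: the shared-memory instance id, the tracepoint mask, optional no-hugepages flags, and (because the interrupt-mode gate is true for this suite) the --interrupt-mode flag. Reconstructed for this run; that NVMF_APP is seeded earlier with the nvmf_tgt binary is an assumption, though the launch line later in the trace shows the full path:

    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)   # assumed seed, matching the launch below
    : "${NVMF_APP_SHM_ID:=0}"                     # the ": 0" default step above
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # instance id 0, all tracepoint groups enabled
    NVMF_APP+=("${NO_HUGE[@]}")                   # empty in this run: hugepages stay on
    NVMF_APP+=(--interrupt-mode)                  # appended just below: the mode this suite exercises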
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:39.796 17:25:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:39.796 Cannot find device "nvmf_init_br" 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:39.796 Cannot find device "nvmf_init_br2" 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:39.796 Cannot find device "nvmf_tgt_br" 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.796 Cannot find device "nvmf_tgt_br2" 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:25:39.796 17:25:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:39.796 Cannot find device "nvmf_init_br" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:39.796 Cannot find device "nvmf_init_br2" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:39.796 Cannot find device "nvmf_tgt_br" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:39.796 Cannot find device "nvmf_tgt_br2" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:39.796 Cannot find device 
"nvmf_br" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:39.796 Cannot find device "nvmf_init_if" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:39.796 Cannot find device "nvmf_init_if2" 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.796 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:39.796 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:40.056 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.056 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:25:40.056 00:25:40.056 --- 10.0.0.3 ping statistics --- 00:25:40.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.056 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:40.056 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
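Everything from nvmf_veth_init above builds one disposable topology. The "Cannot find device" lines are the tolerated pre-cleanup (each ip command is paired with a true, so missing leftovers are not errors); then two initiator veth pairs stay on the host, two target pairs move into the nvmf_tgt_ns_spdk namespace, the host-side peers are bridged together, and the firewall rules are tagged with an SPDK_NVMF comment so teardown can strip them with a grep. The same setup as a flat script, condensed from the trace (the real helper records the whole rule text in the comment; a bare SPDK_NVMF tag is enough for the teardown grep):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if       # first initiator address
    ip addr add 10.0.0.2/24 dev nvmf_init_if2      # second initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target address

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br          # enslave the host-side peers
    done

    # admit NVMe/TCP traffic and tag each rule for the teardown grep
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

The four pings that follow (10.0.0.3 and 10.0.0.4 from the host, 10.0.0.1 and 10.0.0.2 from inside the namespace) verify both directions across the bridge before any NVMe traffic is attempted.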
00:25:40.056 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:25:40.056 00:25:40.056 --- 10.0.0.4 ping statistics --- 00:25:40.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.056 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:25:40.056 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:25:40.316 00:25:40.316 --- 10.0.0.1 ping statistics --- 00:25:40.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.316 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:40.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:25:40.316 00:25:40.316 --- 10.0.0.2 ping statistics --- 00:25:40.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.316 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.316 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104516 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104516 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104516 ']' 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.317 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.317 [2024-12-06 17:25:22.445483] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:40.317 [2024-12-06 17:25:22.446746] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:25:40.317 [2024-12-06 17:25:22.446824] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.317 [2024-12-06 17:25:22.598132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.576 [2024-12-06 17:25:22.636986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.576 [2024-12-06 17:25:22.637067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.576 [2024-12-06 17:25:22.637081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.576 [2024-12-06 17:25:22.637091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.576 [2024-12-06 17:25:22.637100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.576 [2024-12-06 17:25:22.637510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.576 [2024-12-06 17:25:22.694176] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:40.576 [2024-12-06 17:25:22.694616] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
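With connectivity proven, the target itself is started inside the namespace (NVMF_APP gets the ip netns exec prefix) and the harness blocks until the RPC socket answers. The startup notices above confirm what the flags request: one core, interrupt mode, reactor on the 0x2 core mask. A sketch of the launch plus a simplified stand-in for waitforlisten; polling with rpc_get_methods is an assumption about the helper's mechanics, not a quote of it:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!                                   # 104516 in this run

    # poll the UNIX-domain RPC socket until the app is ready
    for (( i = 0; i < 100; i++ )); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
               rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done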
00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 [2024-12-06 17:25:22.782446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 [2024-12-06 17:25:22.810776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:25:40.576 17:25:22 
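The zcopy target is provisioned over JSON-RPC in a handful of calls, all visible in the trace (the namespace attach follows just below): a TCP transport created with zero-copy enabled, a subsystem, data and discovery listeners on the namespaced 10.0.0.3 address, and a 32 MiB RAM-backed bdev. Written out as direct rpc.py calls instead of the rpc_cmd wrapper:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

    # -c 0: no in-capsule data; --zcopy: zero-copy data path;
    # -o comes from NVMF_TRANSPORT_OPTS='-t tcp -o' as set for tcp earlier in the trace
    rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10            # any host, fixed serial, 10 namespaces max
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MiB, 4096-byte blocks
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # traced just below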
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 malloc0 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:40.576 { 00:25:40.576 "params": { 00:25:40.576 "name": "Nvme$subsystem", 00:25:40.576 "trtype": "$TEST_TRANSPORT", 00:25:40.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:40.576 "adrfam": "ipv4", 00:25:40.576 "trsvcid": "$NVMF_PORT", 00:25:40.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:40.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:40.576 "hdgst": ${hdgst:-false}, 00:25:40.576 "ddgst": ${ddgst:-false} 00:25:40.576 }, 00:25:40.576 "method": "bdev_nvme_attach_controller" 00:25:40.576 } 00:25:40.576 EOF 00:25:40.576 )") 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:25:40.576 17:25:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:40.576 "params": { 00:25:40.576 "name": "Nvme1", 00:25:40.576 "trtype": "tcp", 00:25:40.576 "traddr": "10.0.0.3", 00:25:40.576 "adrfam": "ipv4", 00:25:40.576 "trsvcid": "4420", 00:25:40.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:40.576 "hdgst": false, 00:25:40.576 "ddgst": false 00:25:40.576 }, 00:25:40.576 "method": "bdev_nvme_attach_controller" 00:25:40.576 }' 00:25:40.836 [2024-12-06 17:25:22.915134] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
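bdevperf is not a kernel initiator here; it attaches to the target through the SPDK NVMe bdev driver, configured by the JSON that gen_nvmf_target_json emits on a file descriptor. The attach entry printed above is verbatim from the trace; the surrounding subsystems/bdev wrapper is reconstructed from memory of nvmf/common.sh, so treat it as a sketch. The flags select a 10-second verify workload at queue depth 128 with 8 KiB I/Os:

    # what bdevperf reads from /dev/fd/62 in this run (wrapper assumed):
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }]
      }]
    }

    # -t: seconds, -q: queue depth, -w: workload, -o: I/O size in bytes
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192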
00:25:40.836 [2024-12-06 17:25:22.915287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104548 ] 00:25:40.836 [2024-12-06 17:25:23.062336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.836 [2024-12-06 17:25:23.093799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.095 Running I/O for 10 seconds... 00:25:42.968 5620.00 IOPS, 43.91 MiB/s [2024-12-06T17:25:26.641Z] 5653.00 IOPS, 44.16 MiB/s [2024-12-06T17:25:27.579Z] 5672.67 IOPS, 44.32 MiB/s [2024-12-06T17:25:28.513Z] 5678.75 IOPS, 44.37 MiB/s [2024-12-06T17:25:29.444Z] 5652.40 IOPS, 44.16 MiB/s [2024-12-06T17:25:30.376Z] 5610.83 IOPS, 43.83 MiB/s [2024-12-06T17:25:31.307Z] 5616.29 IOPS, 43.88 MiB/s [2024-12-06T17:25:32.239Z] 5630.75 IOPS, 43.99 MiB/s [2024-12-06T17:25:33.644Z] 5643.00 IOPS, 44.09 MiB/s [2024-12-06T17:25:33.644Z] 5652.70 IOPS, 44.16 MiB/s 00:25:51.346 Latency(us) 00:25:51.346 [2024-12-06T17:25:33.644Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:51.346 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:25:51.346 Verification LBA range: start 0x0 length 0x1000 00:25:51.346 Nvme1n1 : 10.06 5632.63 44.00 0.00 0.00 22565.43 3932.16 44326.17 00:25:51.346 [2024-12-06T17:25:33.644Z] =================================================================================================================== 00:25:51.346 [2024-12-06T17:25:33.644Z] Total : 5632.63 44.00 0.00 0.00 22565.43 3932.16 44326.17 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104661 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:51.346 { 00:25:51.346 "params": { 00:25:51.346 "name": "Nvme$subsystem", 00:25:51.346 "trtype": "$TEST_TRANSPORT", 00:25:51.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:51.346 "adrfam": "ipv4", 00:25:51.346 "trsvcid": "$NVMF_PORT", 00:25:51.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:51.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:51.346 "hdgst": ${hdgst:-false}, 00:25:51.346 "ddgst": ${ddgst:-false} 00:25:51.346 }, 00:25:51.346 "method": "bdev_nvme_attach_controller" 00:25:51.346 } 00:25:51.346 EOF 00:25:51.346 )") 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:25:51.346 [2024-12-06 
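The run settles around 5.6K IOPS, and the closing table is internally consistent: with 8 KiB I/Os the MiB/s column is just IOPS times I/O size, and at queue depth 128 the average latency follows from Little's law.

    echo "5632.63 * 8192 / 1048576" | bc -l   # ≈ 44.00 MiB/s, matching the MiB/s column
    echo "128 / 5632.63 * 1000" | bc -l       # ≈ 22.7 ms: depth/IOPS, in line with the ~22.6 ms average

The second bdevperf instance launched here (perfpid 104661) runs for 5 seconds with -w randrw -M 50, i.e. a 50/50 random read/write mix, while the test keeps issuing RPCs against the live subsystem.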
17:25:33.430083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.346 [2024-12-06 17:25:33.430126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:25:51.346 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:25:51.346 17:25:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:51.346 "params": { 00:25:51.346 "name": "Nvme1", 00:25:51.346 "trtype": "tcp", 00:25:51.346 "traddr": "10.0.0.3", 00:25:51.346 "adrfam": "ipv4", 00:25:51.346 "trsvcid": "4420", 00:25:51.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:51.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:51.346 "hdgst": false, 00:25:51.346 "ddgst": false 00:25:51.346 }, 00:25:51.346 "method": "bdev_nvme_attach_controller" 00:25:51.346 }' 00:25:51.346 [2024-12-06 17:25:33.442060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.346 [2024-12-06 17:25:33.442092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.346 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.346 [2024-12-06 17:25:33.454050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.346 [2024-12-06 17:25:33.454078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.346 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.346 [2024-12-06 17:25:33.466046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.346 [2024-12-06 17:25:33.466073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.346 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.346 [2024-12-06 17:25:33.476590] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
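From here to the end of the excerpt the pattern repeats: while the randrw bdevperf run is being set up and executed, the test keeps calling nvmf_subsystem_add_ns for NSID 1, which is still attached, so the target answers with JSON-RPC -32602 exactly as logged, and the nvmf_rpc_ns_paused frames show each attempt going through the subsystem pause/resume path in interrupt mode. The visible behavior is only the failing adds; whether remove calls interleave outside this excerpt is not shown. A reconstructed shape of that loop (illustrative; the real driver is target/zcopy.sh, and the iteration count is an assumption):

    # each call is expected to fail while NSID 1 is busy; the test exercises
    # the RPC pause/resume path under live zero-copy I/O
    for (( i = 0; i < 50; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done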
00:25:51.346 [2024-12-06 17:25:33.476665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104661 ] 00:25:51.346 [2024-12-06 17:25:33.478048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.346 [2024-12-06 17:25:33.478074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.346 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.346 [2024-12-06 17:25:33.490045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.346 [2024-12-06 17:25:33.490071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.502041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.502067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.514072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.514097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.526122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.526152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.538042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.538068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.550058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.550084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.562077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.562106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.574114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.574156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.586056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.586082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.598072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.598097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.610041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.610067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.621488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.347 [2024-12-06 17:25:33.622049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.622077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.347 [2024-12-06 17:25:33.634078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.347 [2024-12-06 17:25:33.634112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.347 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.646064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.646095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.654611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.606 [2024-12-06 17:25:33.658051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.658078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.670083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.670124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.682087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.682127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.694078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.694112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.706081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.706119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.718065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.718101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.730064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.730098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.742068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.742103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.754062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.754093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns 
method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.766057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.766088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.778061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.778095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 Running I/O for 5 seconds... 00:25:51.606 [2024-12-06 17:25:33.796132] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.796169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.812523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.812562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.828606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.828642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.851165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.851203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 
17:25:33.867175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.867211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.606 [2024-12-06 17:25:33.885480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.606 [2024-12-06 17:25:33.885519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.606 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.906165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:33.906205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.917058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:33.917095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.927685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:33.927721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.941534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:33.941589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.962886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 
[2024-12-06 17:25:33.962924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.980551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:33.980603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:33 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:33.996529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:33.996580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:34.011716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:34.011754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:34.030042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:34.030095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.866 [2024-12-06 17:25:34.040485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.866 [2024-12-06 17:25:34.040523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.866 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.055653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.055696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 
17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.072329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.072386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.088489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.088546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.104023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.104079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.122657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.122750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.140777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.140831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:51.867 [2024-12-06 17:25:34.151232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:51.867 [2024-12-06 17:25:34.151297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:51.867 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.167735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.167771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.184279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.184342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.200322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.200372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.216719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.216771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.229335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.229416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.239420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.239455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.254581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.254633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.274838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.274875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.293830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.293882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.315529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.315565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.332578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.332615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.355258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.355312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.370166] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.370218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.380074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.380124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.395298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.395360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.127 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.127 [2024-12-06 17:25:34.414598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.127 [2024-12-06 17:25:34.414634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.128 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.435049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.435119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.453750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.453787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.463729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 
17:25:34.463765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.479031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.479067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.497789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.497826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.508104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.508155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.523101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.523138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.542046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.542087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.553043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.553094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.575211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.575252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.591231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.591295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.610966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.611003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.625659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.625694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.635694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.635743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.652214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.652250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.387 [2024-12-06 17:25:34.667863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.387 [2024-12-06 17:25:34.667899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.387 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.686085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.686137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.696659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.696697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.712832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.712882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.727745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.727797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.746596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.746631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.766436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.766472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.776566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.776604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 11166.00 IOPS, 87.23 MiB/s [2024-12-06T17:25:34.945Z] [2024-12-06 17:25:34.792702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.792768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.807395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.807430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.826212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.826274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.837263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.837309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:25:52.647 [2024-12-06 17:25:34.851184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.851236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.870675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.870747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.889776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.889825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.910159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.910210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.921184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.921243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.647 [2024-12-06 17:25:34.934682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.647 [2024-12-06 17:25:34.934733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.647 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:34.954170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:34.954249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:34.964005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:34.964045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:34.978623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:34.978663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:34.998887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:34.998936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.017961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.018008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.027875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.027917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.042198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.042243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.051842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.051885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.066921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.066976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.086589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.086637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.106642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.106692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.126751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.126796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:52.907 [2024-12-06 17:25:35.145931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:52.907 [2024-12-06 17:25:35.145982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:25:52.907 [2024-12-06 17:25:35.156071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:25:52.907 [2024-12-06 17:25:35.156115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:52.907 2024/12/06 17:25:35 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the identical three-line failure (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" -> nvmf_rpc_ns_paused: "Unable to add namespace" -> JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats for every add attempt from 17:25:35.172 through 17:25:37.178, wall clock 00:25:52.907-00:25:55.085, differing only in timestamps; duplicate records elided here, interleaved throughput samples kept below ...]
11205.00 IOPS, 87.54 MiB/s [2024-12-06T17:25:35.979Z]
11191.00 IOPS, 87.43 MiB/s [2024-12-06T17:25:36.864Z]
00:25:55.085 [2024-12-06 17:25:37.188710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:25:55.085 [2024-12-06 17:25:37.188750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.204077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.204140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.219591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.219638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.238610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.238656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.259041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.259100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.275429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.275486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.294135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.294196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.304489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.304544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.321290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.321355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.343262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.343320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.358390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.358437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.085 [2024-12-06 17:25:37.369043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.085 [2024-12-06 17:25:37.369093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.085 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.383472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.383541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:25:55.345 [2024-12-06 17:25:37.401863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.401908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.412593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.412659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.430530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.430586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.451290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.451375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.468357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.468420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.483634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.483696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.500087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.500127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.516562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.516615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.531949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.531985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.548734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.548774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.563705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.563744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.580128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.580182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.595642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.595679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.614102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.614153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.624196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.624255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.345 [2024-12-06 17:25:37.638325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.345 [2024-12-06 17:25:37.638366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.345 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.648211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.648247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.662498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.662552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.683752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.683806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.700674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.700721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.716335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.716393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.732630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.732674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.747905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.747966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.764200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.764242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.780636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.780698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] 
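The record above is the entire failure mode for this phase: the host keeps re-issuing nvmf_subsystem_add_ns for an NSID that is already attached, and the target rejects each attempt. As a minimal sketch (not part of the test suite), the same call can be reproduced against a running target over SPDK's JSON-RPC socket; the request body below is reconstructed from the params map in the log, and the /var/tmp/spdk.sock path is the usual default rather than anything this job is shown configuring:

import json
import socket

# Illustrative client only; assumes SPDK's default RPC socket path.
SOCK_PATH = "/var/tmp/spdk.sock"

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    # Fields taken from the params map[...] dump in the log above.
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect(SOCK_PATH)
    sock.sendall(json.dumps(request).encode())
    # When NSID 1 is already in use, the reply carries a JSON-RPC error
    # object with code -32602, which the Go client renders as
    # "Code=-32602 Msg=Invalid parameters".
    print(json.loads(sock.recv(65536)))

SPDK's bundled client expresses the same call as scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 (flag spelling not verified against this exact SPDK revision).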
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 11162.50 IOPS, 87.21 MiB/s [2024-12-06T17:25:37.903Z] [2024-12-06 17:25:37.791419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.791470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.807796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.807840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.824135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.824178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.840240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.840292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.855859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.855901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.872376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.872430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.605 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.605 [2024-12-06 17:25:37.887787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.605 [2024-12-06 17:25:37.887824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.606 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:37.904586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:37.904626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:37.921016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:37.921055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:37.935641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:37.935678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:37.951907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:37.951981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:37.969223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:37.969313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:37.990769] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:37.990841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:37 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.010591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.010671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.030469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.030530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.048948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.049023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.059647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.059715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.074445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.074507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.084795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 
17:25:38.084846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.100383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.100418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.115829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.115872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.132791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.132831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:55.866 [2024-12-06 17:25:38.146696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:55.866 [2024-12-06 17:25:38.146741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:55.866 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.167575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.167639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.182445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.182484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 
error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.203190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.203247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.222693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.222753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.242836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.242874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.261395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.261442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.271391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.271426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.286083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.286118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.296537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.296589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.311107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.311175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.330395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.330449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.340926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.340966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.358619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.358657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.378863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.378901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received 
for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.397503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.397554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.407855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.407891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.126 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.126 [2024-12-06 17:25:38.422559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.126 [2024-12-06 17:25:38.422598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.385 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.386 [2024-12-06 17:25:38.443554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.386 [2024-12-06 17:25:38.443604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.386 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.386 [2024-12-06 17:25:38.458956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.386 [2024-12-06 17:25:38.458993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.386 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.386 [2024-12-06 17:25:38.478308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.386 [2024-12-06 17:25:38.478341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.386 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:25:56.386 [2024-12-06 17:25:38.488117] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:25:56.386 [2024-12-06 17:25:38.488153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:25:56.386 2024/12/06 17:25:38 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:25:56.386 [... the same three-message failure (subsystem.c:2130 "Requested NSID 1 already in use", nvmf_rpc.c:1520 "Unable to add namespace", JSON-RPC Code=-32602 Msg=Invalid parameters) repeats for every retry between 17:25:38.503 and 17:25:38.782; duplicates elided, only the timestamps differ ...]
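For readers decoding the Go client's map[...] dump in the errors above: each failing call is equivalent to the JSON-RPC 2.0 request body below, reconstructed from the field names printed in the log. This is a hedged sketch; the "id" value and the file name are illustrative, not taken from the log.

  # Write the request body the Go RPC client is effectively sending (bash heredoc).
  cat > add_ns_request.json <<'EOF'
  {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
      "nqn": "nqn.2016-06.io.spdk:cnode1",
      "namespace": {
        "bdev_name": "malloc0",
        "nsid": 1,
        "no_auto_visible": false,
        "hide_metadata": false
      }
    }
  }
  EOF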
00:25:56.646 11150.20 IOPS, 87.11 MiB/s [2024-12-06T17:25:38.944Z]
00:25:56.646 [... one more identical duplicate-NSID failure at 17:25:38.793 elided ...]
00:25:56.646 Latency(us)
00:25:56.646 [2024-12-06T17:25:38.944Z] Device Information : runtime(s)      IOPS     MiB/s   Fail/s     TO/s   Average       min       max
00:25:56.646 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:25:56.646 Nvme1n1            :       5.01   11151.53     87.12     0.00     0.00  11463.16   2681.02  18826.71
00:25:56.646 ===================================================================================================================
00:25:56.646 Total              :              11151.53     87.12     0.00     0.00  11463.16   2681.02  18826.71
00:25:56.646 [... the duplicate-NSID failure sequence resumes at 17:25:38.802 and repeats through 17:25:38.934; duplicates elided, only the timestamps differ ...]
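The flood of failures is expected here: target/zcopy.sh apparently keeps re-requesting NSID 1 from a background process (PID 104661, reaped below) while that NSID is already attached, exercising the target's RPC path under load. A minimal sketch of the same collision against a live target via SPDK's scripts/rpc.py, assuming a target already listening on the default RPC socket (paths taken from this log's repo layout; this is not the test script itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                     # path per this log's layout
  $rpc bdev_malloc_create 64 512 -b malloc0                           # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # first add claims NSID 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1  # rejected: Requested NSID 1 already in use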
00:25:56.647 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104661) - No such process 00:25:56.647 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104661 00:25:56.647 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:56.647 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.647 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:56.905 delay0 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.905 17:25:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:25:56.905 [2024-12-06 17:25:39.143684] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:05.048 Initializing NVMe Controllers
00:26:05.048 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.048 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.048 Initialization complete. Launching workers. 00:26:05.048 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 224, failed: 27835 00:26:05.048 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 27908, failed to submit 151 00:26:05.048 success 27838, unsuccessful 70, failed 0 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.048 rmmod nvme_tcp 00:26:05.048 rmmod nvme_fabrics 00:26:05.048 rmmod nvme_keyring 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104516 ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104516 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104516 ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104516 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104516 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104516' 00:26:05.048 killing process with pid 104516 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104516 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104516 00:26:05.048 17:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:26:05.048 00:26:05.048 real 0m24.927s 00:26:05.048 user 0m38.349s 
00:26:05.048 sys 0m8.098s 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:26:05.048 ************************************ 00:26:05.048 END TEST nvmf_zcopy 00:26:05.048 ************************************ 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:05.048 ************************************ 00:26:05.048 START TEST nvmf_nmic 00:26:05.048 ************************************ 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:26:05.048 * Looking for test storage... 00:26:05.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 
00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:05.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.048 --rc genhtml_branch_coverage=1 00:26:05.048 --rc genhtml_function_coverage=1 00:26:05.048 --rc genhtml_legend=1 00:26:05.048 --rc geninfo_all_blocks=1 00:26:05.048 --rc geninfo_unexecuted_blocks=1 00:26:05.048 00:26:05.048 ' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:05.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.048 --rc genhtml_branch_coverage=1 00:26:05.048 --rc genhtml_function_coverage=1 00:26:05.048 --rc genhtml_legend=1 00:26:05.048 --rc geninfo_all_blocks=1 00:26:05.048 --rc geninfo_unexecuted_blocks=1 00:26:05.048 00:26:05.048 ' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:05.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.048 --rc genhtml_branch_coverage=1 00:26:05.048 --rc genhtml_function_coverage=1 00:26:05.048 --rc genhtml_legend=1 00:26:05.048 --rc geninfo_all_blocks=1 00:26:05.048 --rc geninfo_unexecuted_blocks=1 00:26:05.048 00:26:05.048 ' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:05.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:05.048 --rc genhtml_branch_coverage=1 00:26:05.048 --rc genhtml_function_coverage=1 00:26:05.048 --rc genhtml_legend=1 00:26:05.048 --rc geninfo_all_blocks=1 00:26:05.048 --rc 
geninfo_unexecuted_blocks=1 00:26:05.048 00:26:05.048 ' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.048 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.049 17:25:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:05.049 Cannot find device "nvmf_init_br" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:05.049 Cannot find device "nvmf_init_br2" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:05.049 Cannot find device "nvmf_tgt_br" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.049 Cannot find device "nvmf_tgt_br2" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:05.049 Cannot find device "nvmf_init_br" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:05.049 Cannot find device "nvmf_init_br2" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:05.049 Cannot find device "nvmf_tgt_br" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:05.049 Cannot find device "nvmf_tgt_br2" 00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
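The "Cannot find device" probes above are the cleanup pass finding nothing to delete on a fresh runner; nvmf_veth_init then rebuilds the test network, as traced below. A condensed sketch of that topology, with commands mirroring the trace (the second initiator/target pair, the FORWARD accept rule, and error handling are omitted):

  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair (root netns)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up      # bridge joins the two pairs
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.3                                             # initiator-to-target sanity check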
00:26:05.049 17:25:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:05.049 Cannot find device "nvmf_br" 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:05.049 Cannot find device "nvmf_init_if" 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:05.049 Cannot find device "nvmf_init_if2" 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:05.049 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:05.050 17:25:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:05.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:05.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:26:05.050 00:26:05.050 --- 10.0.0.3 ping statistics --- 00:26:05.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.050 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:05.050 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:05.050 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:26:05.050 00:26:05.050 --- 10.0.0.4 ping statistics --- 00:26:05.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.050 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:05.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:05.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:26:05.050 00:26:05.050 --- 10.0.0.1 ping statistics --- 00:26:05.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.050 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:05.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:05.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:26:05.050 00:26:05.050 --- 10.0.0.2 ping statistics --- 00:26:05.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:05.050 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=105032 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 105032 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 105032 ']' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.050 17:25:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:05.308 [2024-12-06 17:25:47.371309] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:05.308 [2024-12-06 17:25:47.372341] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:26:05.308 [2024-12-06 17:25:47.372397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.308 [2024-12-06 17:25:47.523659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:05.308 [2024-12-06 17:25:47.565222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:05.308 [2024-12-06 17:25:47.565303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:05.308 [2024-12-06 17:25:47.565345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:05.308 [2024-12-06 17:25:47.565356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:05.308 [2024-12-06 17:25:47.565365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:05.308 [2024-12-06 17:25:47.566146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.308 [2024-12-06 17:25:47.566772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:05.308 [2024-12-06 17:25:47.566848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:05.308 [2024-12-06 17:25:47.566849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.566 [2024-12-06 17:25:47.625720] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:05.566 [2024-12-06 17:25:47.626218] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:05.566 [2024-12-06 17:25:47.626342] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:26:05.566 [2024-12-06 17:25:47.626455] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:05.566 [2024-12-06 17:25:47.626839] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:06.133 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.133 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:26:06.133 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:06.133 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.133 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:06.391 [2024-12-06 17:25:48.456116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:06.391 Malloc0 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.391 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:26:06.392 [2024-12-06 17:25:48.524251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:06.392 test case1: single bdev can't be used in multiple subsystems
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:26:06.392 [2024-12-06 17:25:48.552053] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:26:06.392 [2024-12-06 17:25:48.552095] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:26:06.392 [2024-12-06 17:25:48.552110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:26:06.392 request:
00:26:06.392 {
00:26:06.392 "method": "nvmf_subsystem_add_ns",
00:26:06.392 "params": {
00:26:06.392 2024/12/06 17:25:48 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 hide_metadata:%!s(bool=false) no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:26:06.392 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:26:06.392 "namespace": {
00:26:06.392 "bdev_name": "Malloc0",
00:26:06.392 "no_auto_visible": false,
00:26:06.392 "hide_metadata": false
00:26:06.392 }
00:26:06.392 }
00:26:06.392 }
00:26:06.392 Got JSON-RPC error response
00:26:06.392 GoRPCClient: error on JSON-RPC call
00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:26:06.392 Adding namespace failed - expected result. 00:26:06.392 test case2: host connect to nvmf target in multiple paths 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:06.392 [2024-12-06 17:25:48.564175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:06.392 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:26:06.650 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:26:06.650 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:26:06.650 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:06.650 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:06.650 17:25:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:26:08.551 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:08.551 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:08.552 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:08.552 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:08.552 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:08.552 17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:26:08.552 
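The sequence above reduces to a handful of RPCs against the running target. A minimal sketch of both nmic cases, with every command, NQN, address, and serial taken from the xtrace output (rpc_cmd is the harness wrapper around rpc.py, so direct rpc.py calls are equivalent):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Target setup: TCP transport, one malloc bdev, one subsystem, one listener.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Test case 1: Malloc0 is already claimed exclusive_write by cnode1, so adding
# it to a second subsystem must fail with the JSON-RPC error shown above.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'expected failure'
# Test case 2: a second listener on port 4421 gives the host two paths to cnode1.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 \
    --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 \
    --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421
# waitforserial then polls lsblk -l -o NAME,SERIAL until a device with serial
# SPDKISFASTANDAWESOME appears; grep -c returning 1 is the success condition.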
17:25:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:26:08.552 [global]
00:26:08.552 thread=1
00:26:08.552 invalidate=1
00:26:08.552 rw=write
00:26:08.552 time_based=1
00:26:08.552 runtime=1
00:26:08.552 ioengine=libaio
00:26:08.552 direct=1
00:26:08.552 bs=4096
00:26:08.552 iodepth=1
00:26:08.552 norandommap=0
00:26:08.552 numjobs=1
00:26:08.552
00:26:08.552 verify_dump=1
00:26:08.552 verify_backlog=512
00:26:08.552 verify_state_save=0
00:26:08.552 do_verify=1
00:26:08.552 verify=crc32c-intel
00:26:08.552 [job0]
00:26:08.552 filename=/dev/nvme0n1
00:26:08.552 Could not set queue depth (nvme0n1)
00:26:08.837 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:26:08.837 fio-3.35
00:26:08.837 Starting 1 thread
00:26:09.770
00:26:09.770 job0: (groupid=0, jobs=1): err= 0: pid=105138: Fri Dec 6 17:25:52 2024
00:26:09.770 read: IOPS=2751, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1001msec)
00:26:09.770 slat (nsec): min=15275, max=40874, avg=18489.01, stdev=3180.77
00:26:09.770 clat (usec): min=155, max=437, avg=171.52, stdev=10.10
00:26:09.770 lat (usec): min=172, max=460, avg=190.00, stdev=10.87
00:26:09.770 clat percentiles (usec):
00:26:09.770 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 163], 20.00th=[ 165],
00:26:09.770 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 169], 60.00th=[ 172],
00:26:09.770 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 186],
00:26:09.770 | 99.00th=[ 194], 99.50th=[ 212], 99.90th=[ 289], 99.95th=[ 302],
00:26:09.770 | 99.99th=[ 437]
00:26:09.770 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets
00:26:09.770 slat (usec): min=21, max=156, avg=26.73, stdev= 6.63
00:26:09.770 clat (usec): min=102, max=2404, avg=124.76, stdev=73.63
00:26:09.770 lat (usec): min=127, max=2442, avg=151.49, stdev=74.78
00:26:09.770 clat percentiles (usec):
00:26:09.770 | 1.00th=[ 111], 5.00th=[ 113], 10.00th=[ 114], 20.00th=[ 116],
00:26:09.770 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 120],
00:26:09.770 | 70.00th=[ 121], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 137],
00:26:09.770 | 99.00th=[ 178], 99.50th=[ 343], 99.90th=[ 1483], 99.95th=[ 1860],
00:26:09.770 | 99.99th=[ 2409]
00:26:09.770 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:26:09.770 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:26:09.770 lat (usec) : 250=99.62%, 500=0.21%, 750=0.05%, 1000=0.03%
00:26:09.770 lat (msec) : 2=0.07%, 4=0.02%
00:26:09.770 cpu : usr=1.90%, sys=10.50%, ctx=5844, majf=0, minf=5
00:26:09.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:26:09.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:09.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:09.770 issued rwts: total=2754,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:09.770 latency : target=0, window=0, percentile=100.00%, depth=1
00:26:09.770
00:26:09.770 Run status group 0 (all jobs):
00:26:09.770 READ: bw=10.7MiB/s (11.3MB/s), 10.7MiB/s-10.7MiB/s (11.3MB/s-11.3MB/s), io=10.8MiB (11.3MB), run=1001-1001msec
00:26:09.770 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec
00:26:09.770
00:26:09.770 Disk stats (read/write):
00:26:09.770 nvme0n1: ios=2610/2617, merge=0/0, ticks=463/340, in_queue=803, util=91.18%
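fio-wrapper echoes the job file it generates before running it, so the workload is easy to reproduce outside the harness. A minimal sketch, assuming the namespace is still attached as /dev/nvme0n1 (the /tmp path is an arbitrary choice; every job parameter is copied from the output above):

# Recreate the printed job file and run it directly with fio.
cat > /tmp/nmic-job0.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1

verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job0.fio

At iodepth=1 with crc32c verification the run above completed 2754 4 KiB reads and 3072 writes in its one-second window (~10.7 MiB/s read, ~12.0 MiB/s write), the expected ballpark for a single-queue-depth verified job over a virtual TCP link.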
00:26:09.770 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:10.027 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.027 rmmod nvme_tcp 00:26:10.027 rmmod nvme_fabrics 00:26:10.027 rmmod nvme_keyring 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 105032 ']' 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 105032 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 105032 ']' 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 105032 00:26:10.027 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105032 00:26:10.284 killing process with pid 105032 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105032' 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 105032 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 105032 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:10.284 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 
00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:26:10.541 00:26:10.541 real 0m6.055s 00:26:10.541 user 0m14.821s 00:26:10.541 sys 0m2.389s 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:26:10.541 ************************************ 00:26:10.541 END TEST nvmf_nmic 00:26:10.541 ************************************ 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:10.541 ************************************ 00:26:10.541 START TEST nvmf_fio_target 00:26:10.541 ************************************ 00:26:10.541 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:26:10.827 * Looking for test storage... 
00:26:10.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:10.827 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:10.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.828 --rc genhtml_branch_coverage=1 00:26:10.828 --rc genhtml_function_coverage=1 00:26:10.828 --rc genhtml_legend=1 00:26:10.828 --rc geninfo_all_blocks=1 00:26:10.828 --rc geninfo_unexecuted_blocks=1 00:26:10.828 00:26:10.828 ' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:10.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.828 --rc genhtml_branch_coverage=1 00:26:10.828 --rc genhtml_function_coverage=1 00:26:10.828 --rc genhtml_legend=1 00:26:10.828 --rc geninfo_all_blocks=1 00:26:10.828 --rc geninfo_unexecuted_blocks=1 00:26:10.828 00:26:10.828 ' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:10.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.828 --rc genhtml_branch_coverage=1 00:26:10.828 --rc genhtml_function_coverage=1 00:26:10.828 --rc genhtml_legend=1 00:26:10.828 --rc geninfo_all_blocks=1 00:26:10.828 --rc geninfo_unexecuted_blocks=1 00:26:10.828 00:26:10.828 ' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:10.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:10.828 --rc genhtml_branch_coverage=1 00:26:10.828 --rc genhtml_function_coverage=1 00:26:10.828 --rc genhtml_legend=1 00:26:10.828 --rc geninfo_all_blocks=1 00:26:10.828 --rc geninfo_unexecuted_blocks=1 00:26:10.828 
00:26:10.828 ' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:10.828 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:10.829 17:25:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:10.829 17:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:10.829 Cannot find device "nvmf_init_br" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:10.829 Cannot find device "nvmf_init_br2" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:10.829 Cannot find device "nvmf_tgt_br" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:10.829 Cannot find device "nvmf_tgt_br2" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:10.829 Cannot find device "nvmf_init_br" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:10.829 Cannot find device "nvmf_init_br2" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:10.829 Cannot find device "nvmf_tgt_br" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:10.829 Cannot find device "nvmf_tgt_br2" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:10.829 Cannot find device "nvmf_br" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:10.829 Cannot find device "nvmf_init_if" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:10.829 Cannot find device "nvmf_init_if2" 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:10.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:10.829 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:26:10.829 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:11.087 17:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:11.087 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:11.088 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:11.088 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:11.088 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:11.088 17:25:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:11.088 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:11.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:11.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:26:11.346 00:26:11.346 --- 10.0.0.3 ping statistics --- 00:26:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.346 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:11.346 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:11.346 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:26:11.346 00:26:11.346 --- 10.0.0.4 ping statistics --- 00:26:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.346 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:11.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:11.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:26:11.346 00:26:11.346 --- 10.0.0.1 ping statistics --- 00:26:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.346 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:11.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:11.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:26:11.346 00:26:11.346 --- 10.0.0.2 ping statistics --- 00:26:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:11.346 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105366 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105366 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105366 ']' 00:26:11.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.346 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.346 [2024-12-06 17:25:53.486374] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:26:11.346 [2024-12-06 17:25:53.487867] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization...
00:26:11.346 [2024-12-06 17:25:53.488087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:11.605 [2024-12-06 17:25:53.643584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:11.605 [2024-12-06 17:25:53.683024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:11.605 [2024-12-06 17:25:53.683257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:11.605 [2024-12-06 17:25:53.683490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:11.605 [2024-12-06 17:25:53.683652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:11.605 [2024-12-06 17:25:53.683837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:11.605 [2024-12-06 17:25:53.684859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:11.605 [2024-12-06 17:25:53.684953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:11.605 [2024-12-06 17:25:53.685338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:11.605 [2024-12-06 17:25:53.685349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:11.605 [2024-12-06 17:25:53.742402] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:26:11.606 [2024-12-06 17:25:53.742660] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:26:11.606 [2024-12-06 17:25:53.743045] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:26:11.606 [2024-12-06 17:25:53.743502] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:26:11.606 [2024-12-06 17:25:53.744016] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
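This is the interrupt-mode variant of the usual target bring-up: the four reactors park in an event-driven wait instead of busy-polling, and each SPDK thread is flipped to intr mode as it is created (the "to intr mode from intr mode" wording is the notice printing the new and old state, which are equal here because the app started interrupt-driven). A sketch of the launch as it appears in the xtrace, run inside the test namespace so the target owns the 10.0.0.3/10.0.0.4 veth endpoints; the backgrounding and pid capture stand in for the harness's nvmfappstart/waitforlisten helpers:

# Start nvmf_tgt on cores 0-3 with all trace groups enabled, interrupt mode on.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# rpc.py talks to /var/tmp/spdk.sock, a Unix-domain socket, so the transport,
# malloc bdevs, raid0/concat0 arrays, subsystem, and listener configured next
# can be driven from the default namespace.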
00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.606 17:25:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:11.864 [2024-12-06 17:25:54.062199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:11.864 17:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:12.432 17:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:26:12.432 17:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:12.690 17:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:26:12.690 17:25:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:12.954 17:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:26:12.954 17:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:13.213 17:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:26:13.213 17:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:26:13.470 17:25:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:14.034 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:26:14.034 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:14.034 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:26:14.034 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:14.599 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:26:14.599 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:26:14.857 17:25:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:15.140 17:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:15.140 17:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.398 17:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:26:15.398 17:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:15.656 17:25:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:15.915 [2024-12-06 17:25:58.126233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:15.915 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:26:16.174 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:26:16.742 17:25:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:26:18.644 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:18.644 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:18.644 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:18.644 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:26:18.644 17:26:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:18.644 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:26:18.644 17:26:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:26:18.644 [global] 00:26:18.644 thread=1 00:26:18.644 invalidate=1 00:26:18.644 rw=write 00:26:18.644 time_based=1 00:26:18.644 runtime=1 00:26:18.644 ioengine=libaio 00:26:18.644 direct=1 00:26:18.644 bs=4096 00:26:18.644 iodepth=1 00:26:18.644 norandommap=0 00:26:18.644 numjobs=1 00:26:18.644 00:26:18.644 verify_dump=1 00:26:18.644 verify_backlog=512 00:26:18.644 verify_state_save=0 00:26:18.644 do_verify=1 00:26:18.644 verify=crc32c-intel 00:26:18.644 [job0] 00:26:18.644 filename=/dev/nvme0n1 00:26:18.644 [job1] 00:26:18.644 filename=/dev/nvme0n2 00:26:18.644 [job2] 00:26:18.644 filename=/dev/nvme0n3 00:26:18.644 [job3] 00:26:18.644 filename=/dev/nvme0n4 00:26:18.644 Could not set queue depth (nvme0n1) 00:26:18.644 Could not set queue depth (nvme0n2) 00:26:18.644 Could not set queue depth (nvme0n3) 00:26:18.644 Could not set queue depth (nvme0n4) 00:26:18.902 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:18.902 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:18.902 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:18.902 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:18.902 fio-3.35 00:26:18.902 Starting 4 threads 00:26:20.292 00:26:20.292 job0: (groupid=0, jobs=1): err= 0: pid=105651: Fri Dec 6 17:26:02 2024 00:26:20.292 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:26:20.292 slat (usec): min=12, max=112, avg=23.07, stdev=12.19 00:26:20.292 clat (usec): min=203, max=41228, avg=510.08, stdev=1280.07 00:26:20.292 lat (usec): min=222, max=41249, avg=533.15, stdev=1279.81 00:26:20.292 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 249], 5.00th=[ 334], 10.00th=[ 351], 20.00th=[ 367], 00:26:20.293 | 30.00th=[ 383], 40.00th=[ 400], 50.00th=[ 424], 60.00th=[ 482], 00:26:20.293 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 742], 00:26:20.293 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 1106], 99.95th=[41157], 00:26:20.293 | 99.99th=[41157] 00:26:20.293 write: IOPS=1123, BW=4496KiB/s (4603kB/s)(4500KiB/1001msec); 0 zone resets 00:26:20.293 slat (nsec): min=22003, max=85482, avg=32076.40, stdev=4417.68 00:26:20.293 clat (usec): min=150, max=993, avg=367.12, stdev=101.97 00:26:20.293 lat (usec): min=186, max=1022, avg=399.19, stdev=101.64 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 265], 20.00th=[ 273], 00:26:20.293 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 359], 60.00th=[ 416], 00:26:20.293 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 478], 95.00th=[ 490], 00:26:20.293 | 99.00th=[ 758], 99.50th=[ 840], 99.90th=[ 955], 99.95th=[ 996], 00:26:20.293 | 99.99th=[ 996] 00:26:20.293 bw ( KiB/s): min= 5136, max= 5136, per=24.47%, avg=5136.00, stdev= 0.00, samples=1 00:26:20.293 iops : min= 1284, max= 1284, avg=1284.00, stdev= 0.00, samples=1 00:26:20.293 lat (usec) : 250=2.79%, 500=78.59%, 750=15.91%, 1000=2.56% 00:26:20.293 lat 
(msec) : 2=0.09%, 50=0.05% 00:26:20.293 cpu : usr=0.70%, sys=5.00%, ctx=2246, majf=0, minf=9 00:26:20.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 issued rwts: total=1024,1125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:20.293 job1: (groupid=0, jobs=1): err= 0: pid=105652: Fri Dec 6 17:26:02 2024 00:26:20.293 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:26:20.293 slat (nsec): min=21889, max=62997, avg=30125.65, stdev=6480.93 00:26:20.293 clat (usec): min=166, max=3875, avg=407.20, stdev=166.09 00:26:20.293 lat (usec): min=207, max=3911, avg=437.32, stdev=164.11 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 318], 00:26:20.293 | 30.00th=[ 355], 40.00th=[ 379], 50.00th=[ 412], 60.00th=[ 465], 00:26:20.293 | 70.00th=[ 478], 80.00th=[ 490], 90.00th=[ 545], 95.00th=[ 603], 00:26:20.293 | 99.00th=[ 685], 99.50th=[ 824], 99.90th=[ 922], 99.95th=[ 3884], 00:26:20.293 | 99.99th=[ 3884] 00:26:20.293 write: IOPS=1463, BW=5854KiB/s (5995kB/s)(5860KiB/1001msec); 0 zone resets 00:26:20.293 slat (nsec): min=30680, max=86862, avg=39638.51, stdev=8377.10 00:26:20.293 clat (usec): min=135, max=1009, avg=332.36, stdev=101.55 00:26:20.293 lat (usec): min=177, max=1051, avg=372.00, stdev=97.94 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 149], 5.00th=[ 188], 10.00th=[ 239], 20.00th=[ 251], 00:26:20.293 | 30.00th=[ 265], 40.00th=[ 285], 50.00th=[ 330], 60.00th=[ 351], 00:26:20.293 | 70.00th=[ 367], 80.00th=[ 420], 90.00th=[ 449], 95.00th=[ 486], 00:26:20.293 | 99.00th=[ 619], 99.50th=[ 807], 99.90th=[ 963], 99.95th=[ 1012], 00:26:20.293 | 99.99th=[ 1012] 00:26:20.293 bw ( KiB/s): min= 6880, max= 6880, per=32.78%, avg=6880.00, stdev= 0.00, samples=1 00:26:20.293 iops : min= 1720, max= 1720, avg=1720.00, stdev= 0.00, samples=1 00:26:20.293 lat (usec) : 250=18.08%, 500=72.80%, 750=8.48%, 1000=0.56% 00:26:20.293 lat (msec) : 2=0.04%, 4=0.04% 00:26:20.293 cpu : usr=1.60%, sys=6.80%, ctx=2490, majf=0, minf=5 00:26:20.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 issued rwts: total=1024,1465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:20.293 job2: (groupid=0, jobs=1): err= 0: pid=105653: Fri Dec 6 17:26:02 2024 00:26:20.293 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:26:20.293 slat (usec): min=12, max=106, avg=22.71, stdev=11.96 00:26:20.293 clat (usec): min=223, max=41131, avg=510.05, stdev=1276.64 00:26:20.293 lat (usec): min=251, max=41155, avg=532.76, stdev=1276.74 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 265], 5.00th=[ 343], 10.00th=[ 355], 20.00th=[ 371], 00:26:20.293 | 30.00th=[ 388], 40.00th=[ 408], 50.00th=[ 429], 60.00th=[ 461], 00:26:20.293 | 70.00th=[ 529], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 758], 00:26:20.293 | 99.00th=[ 799], 99.50th=[ 832], 99.90th=[ 1123], 99.95th=[41157], 00:26:20.293 | 99.99th=[41157] 00:26:20.293 write: IOPS=1125, BW=4503KiB/s (4612kB/s)(4508KiB/1001msec); 0 zone resets 00:26:20.293 
slat (nsec): min=18397, max=55700, avg=29673.86, stdev=4650.71 00:26:20.293 clat (usec): min=145, max=945, avg=369.28, stdev=98.04 00:26:20.293 lat (usec): min=183, max=973, avg=398.95, stdev=98.51 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:26:20.293 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 359], 60.00th=[ 420], 00:26:20.293 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 478], 95.00th=[ 490], 00:26:20.293 | 99.00th=[ 652], 99.50th=[ 832], 99.90th=[ 930], 99.95th=[ 947], 00:26:20.293 | 99.99th=[ 947] 00:26:20.293 bw ( KiB/s): min= 5138, max= 5138, per=24.48%, avg=5138.00, stdev= 0.00, samples=1 00:26:20.293 iops : min= 1284, max= 1284, avg=1284.00, stdev= 0.00, samples=1 00:26:20.293 lat (usec) : 250=0.84%, 500=80.85%, 750=15.20%, 1000=2.98% 00:26:20.293 lat (msec) : 2=0.09%, 50=0.05% 00:26:20.293 cpu : usr=1.30%, sys=4.30%, ctx=2235, majf=0, minf=13 00:26:20.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 issued rwts: total=1024,1127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.293 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:20.293 job3: (groupid=0, jobs=1): err= 0: pid=105654: Fri Dec 6 17:26:02 2024 00:26:20.293 read: IOPS=1217, BW=4871KiB/s (4988kB/s)(4876KiB/1001msec) 00:26:20.293 slat (nsec): min=12952, max=64171, avg=24546.87, stdev=5754.58 00:26:20.293 clat (usec): min=235, max=971, avg=391.24, stdev=97.98 00:26:20.293 lat (usec): min=255, max=995, avg=415.79, stdev=97.05 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 293], 00:26:20.293 | 30.00th=[ 310], 40.00th=[ 334], 50.00th=[ 404], 60.00th=[ 441], 00:26:20.293 | 70.00th=[ 461], 80.00th=[ 474], 90.00th=[ 490], 95.00th=[ 523], 00:26:20.293 | 99.00th=[ 635], 99.50th=[ 824], 99.90th=[ 971], 99.95th=[ 971], 00:26:20.293 | 99.99th=[ 971] 00:26:20.293 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:26:20.293 slat (nsec): min=14938, max=97512, avg=32161.79, stdev=9283.07 00:26:20.293 clat (usec): min=138, max=5652, avg=284.52, stdev=206.63 00:26:20.293 lat (usec): min=166, max=5697, avg=316.68, stdev=206.90 00:26:20.293 clat percentiles (usec): 00:26:20.293 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 192], 00:26:20.293 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 227], 60.00th=[ 289], 00:26:20.293 | 70.00th=[ 343], 80.00th=[ 379], 90.00th=[ 412], 95.00th=[ 449], 00:26:20.293 | 99.00th=[ 529], 99.50th=[ 799], 99.90th=[ 3785], 99.95th=[ 5669], 00:26:20.293 | 99.99th=[ 5669] 00:26:20.293 bw ( KiB/s): min= 8192, max= 8192, per=39.03%, avg=8192.00, stdev= 0.00, samples=1 00:26:20.293 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:20.293 lat (usec) : 250=31.58%, 500=64.28%, 750=3.48%, 1000=0.44% 00:26:20.293 lat (msec) : 2=0.11%, 4=0.07%, 10=0.04% 00:26:20.293 cpu : usr=1.30%, sys=6.30%, ctx=2804, majf=0, minf=9 00:26:20.293 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:20.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.293 issued rwts: total=1219,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.293 latency : target=0, window=0, percentile=100.00%, depth=1 
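The [job0]-[job3] file printed above is what scripts/fio-wrapper generates for '-p nvmf -i 4096 -d 1 -t write -r 1 -v'. Roughly the same single-job run could be issued directly, on the assumption that fio accepts its job-file keys as --key=value command-line flags (a sketch only; /dev/nvme0n1 is the first namespace exposed by the connect above):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread --time_based \
      --runtime=1 --ioengine=libaio --direct=1 --invalidate=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0

The 'Run status group 0' summary and per-device disk stats for this pass follow. The verify options are what later produce the local-job*-verify.state files that fio.sh removes during teardown.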
00:26:20.293 00:26:20.293 Run status group 0 (all jobs): 00:26:20.293 READ: bw=16.7MiB/s (17.6MB/s), 4092KiB/s-4871KiB/s (4190kB/s-4988kB/s), io=16.8MiB (17.6MB), run=1001-1001msec 00:26:20.294 WRITE: bw=20.5MiB/s (21.5MB/s), 4496KiB/s-6138KiB/s (4603kB/s-6285kB/s), io=20.5MiB (21.5MB), run=1001-1001msec 00:26:20.294 00:26:20.294 Disk stats (read/write): 00:26:20.294 nvme0n1: ios=854/1024, merge=0/0, ticks=472/382, in_queue=854, util=88.88% 00:26:20.294 nvme0n2: ios=1072/1081, merge=0/0, ticks=464/379, in_queue=843, util=89.47% 00:26:20.294 nvme0n3: ios=823/1024, merge=0/0, ticks=454/367, in_queue=821, util=89.47% 00:26:20.294 nvme0n4: ios=1024/1396, merge=0/0, ticks=410/376, in_queue=786, util=88.89% 00:26:20.294 17:26:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:26:20.294 [global] 00:26:20.294 thread=1 00:26:20.294 invalidate=1 00:26:20.294 rw=randwrite 00:26:20.294 time_based=1 00:26:20.294 runtime=1 00:26:20.294 ioengine=libaio 00:26:20.294 direct=1 00:26:20.294 bs=4096 00:26:20.294 iodepth=1 00:26:20.294 norandommap=0 00:26:20.294 numjobs=1 00:26:20.294 00:26:20.294 verify_dump=1 00:26:20.294 verify_backlog=512 00:26:20.294 verify_state_save=0 00:26:20.294 do_verify=1 00:26:20.294 verify=crc32c-intel 00:26:20.294 [job0] 00:26:20.294 filename=/dev/nvme0n1 00:26:20.294 [job1] 00:26:20.294 filename=/dev/nvme0n2 00:26:20.294 [job2] 00:26:20.294 filename=/dev/nvme0n3 00:26:20.294 [job3] 00:26:20.294 filename=/dev/nvme0n4 00:26:20.294 Could not set queue depth (nvme0n1) 00:26:20.294 Could not set queue depth (nvme0n2) 00:26:20.294 Could not set queue depth (nvme0n3) 00:26:20.294 Could not set queue depth (nvme0n4) 00:26:20.294 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.294 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.294 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.294 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:20.294 fio-3.35 00:26:20.294 Starting 4 threads 00:26:21.666 00:26:21.666 job0: (groupid=0, jobs=1): err= 0: pid=105707: Fri Dec 6 17:26:03 2024 00:26:21.667 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:26:21.667 slat (usec): min=14, max=115, avg=28.69, stdev=10.57 00:26:21.667 clat (usec): min=166, max=845, avg=320.39, stdev=55.04 00:26:21.667 lat (usec): min=208, max=879, avg=349.08, stdev=55.83 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 198], 5.00th=[ 221], 10.00th=[ 262], 20.00th=[ 293], 00:26:21.667 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 326], 00:26:21.667 | 70.00th=[ 334], 80.00th=[ 338], 90.00th=[ 392], 95.00th=[ 429], 00:26:21.667 | 99.00th=[ 478], 99.50th=[ 510], 99.90th=[ 603], 99.95th=[ 848], 00:26:21.667 | 99.99th=[ 848] 00:26:21.667 write: IOPS=1699, BW=6797KiB/s (6960kB/s)(6804KiB/1001msec); 0 zone resets 00:26:21.667 slat (usec): min=21, max=114, avg=38.23, stdev=11.58 00:26:21.667 clat (usec): min=122, max=635, avg=228.17, stdev=28.58 00:26:21.667 lat (usec): min=167, max=683, avg=266.41, stdev=26.73 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 147], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 210], 00:26:21.667 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 
00:26:21.667 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[ 273], 00:26:21.667 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 347], 99.95th=[ 635], 00:26:21.667 | 99.99th=[ 635] 00:26:21.667 bw ( KiB/s): min= 8192, max= 8192, per=24.72%, avg=8192.00, stdev= 0.00, samples=1 00:26:21.667 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:21.667 lat (usec) : 250=48.13%, 500=51.56%, 750=0.28%, 1000=0.03% 00:26:21.667 cpu : usr=2.50%, sys=7.90%, ctx=3238, majf=0, minf=11 00:26:21.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 issued rwts: total=1536,1701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:21.667 job1: (groupid=0, jobs=1): err= 0: pid=105708: Fri Dec 6 17:26:03 2024 00:26:21.667 read: IOPS=2166, BW=8667KiB/s (8875kB/s)(8676KiB/1001msec) 00:26:21.667 slat (nsec): min=14485, max=57333, avg=17434.10, stdev=3080.71 00:26:21.667 clat (usec): min=185, max=1253, avg=223.40, stdev=33.45 00:26:21.667 lat (usec): min=203, max=1286, avg=240.84, stdev=33.90 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:26:21.667 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:26:21.667 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 255], 00:26:21.667 | 99.00th=[ 273], 99.50th=[ 318], 99.90th=[ 537], 99.95th=[ 873], 00:26:21.667 | 99.99th=[ 1254] 00:26:21.667 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:26:21.667 slat (usec): min=21, max=104, avg=24.52, stdev= 4.55 00:26:21.667 clat (usec): min=126, max=602, avg=158.55, stdev=19.38 00:26:21.667 lat (usec): min=148, max=648, avg=183.07, stdev=20.17 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:26:21.667 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:26:21.667 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:26:21.667 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 273], 99.95th=[ 594], 00:26:21.667 | 99.99th=[ 603] 00:26:21.667 bw ( KiB/s): min=10664, max=10664, per=32.18%, avg=10664.00, stdev= 0.00, samples=1 00:26:21.667 iops : min= 2666, max= 2666, avg=2666.00, stdev= 0.00, samples=1 00:26:21.667 lat (usec) : 250=96.49%, 500=3.40%, 750=0.06%, 1000=0.02% 00:26:21.667 lat (msec) : 2=0.02% 00:26:21.667 cpu : usr=2.10%, sys=7.30%, ctx=4735, majf=0, minf=5 00:26:21.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 issued rwts: total=2169,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:21.667 job2: (groupid=0, jobs=1): err= 0: pid=105709: Fri Dec 6 17:26:03 2024 00:26:21.667 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:26:21.667 slat (nsec): min=14966, max=98713, avg=30111.65, stdev=9856.96 00:26:21.667 clat (usec): min=202, max=3460, avg=325.58, stdev=116.82 00:26:21.667 lat (usec): min=223, max=3491, avg=355.69, stdev=117.11 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 221], 5.00th=[ 241], 10.00th=[ 273], 
20.00th=[ 297], 00:26:21.667 | 30.00th=[ 306], 40.00th=[ 314], 50.00th=[ 318], 60.00th=[ 322], 00:26:21.667 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 392], 95.00th=[ 420], 00:26:21.667 | 99.00th=[ 490], 99.50th=[ 578], 99.90th=[ 2868], 99.95th=[ 3458], 00:26:21.667 | 99.99th=[ 3458] 00:26:21.667 write: IOPS=1660, BW=6641KiB/s (6801kB/s)(6648KiB/1001msec); 0 zone resets 00:26:21.667 slat (usec): min=22, max=138, avg=41.01, stdev=13.42 00:26:21.667 clat (usec): min=137, max=425, avg=225.87, stdev=26.87 00:26:21.667 lat (usec): min=185, max=523, avg=266.88, stdev=22.57 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 159], 5.00th=[ 182], 10.00th=[ 198], 20.00th=[ 206], 00:26:21.667 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:26:21.667 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 269], 00:26:21.667 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 420], 99.95th=[ 424], 00:26:21.667 | 99.99th=[ 424] 00:26:21.667 bw ( KiB/s): min= 8192, max= 8192, per=24.72%, avg=8192.00, stdev= 0.00, samples=1 00:26:21.667 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:26:21.667 lat (usec) : 250=47.09%, 500=52.50%, 750=0.25%, 1000=0.06% 00:26:21.667 lat (msec) : 2=0.03%, 4=0.06% 00:26:21.667 cpu : usr=2.00%, sys=8.70%, ctx=3199, majf=0, minf=11 00:26:21.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 issued rwts: total=1536,1662,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:21.667 job3: (groupid=0, jobs=1): err= 0: pid=105710: Fri Dec 6 17:26:03 2024 00:26:21.667 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:26:21.667 slat (usec): min=16, max=227, avg=26.66, stdev= 8.70 00:26:21.667 clat (usec): min=25, max=2361, avg=223.47, stdev=56.27 00:26:21.667 lat (usec): min=204, max=2381, avg=250.13, stdev=56.42 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:26:21.667 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:26:21.667 | 70.00th=[ 225], 80.00th=[ 229], 90.00th=[ 262], 95.00th=[ 285], 00:26:21.667 | 99.00th=[ 322], 99.50th=[ 343], 99.90th=[ 570], 99.95th=[ 611], 00:26:21.667 | 99.99th=[ 2376] 00:26:21.667 write: IOPS=2366, BW=9467KiB/s (9694kB/s)(9476KiB/1001msec); 0 zone resets 00:26:21.667 slat (usec): min=23, max=511, avg=37.25, stdev=19.36 00:26:21.667 clat (usec): min=4, max=955, avg=163.54, stdev=38.56 00:26:21.667 lat (usec): min=155, max=994, avg=200.78, stdev=42.93 00:26:21.667 clat percentiles (usec): 00:26:21.667 | 1.00th=[ 137], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:26:21.667 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:26:21.667 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 198], 00:26:21.667 | 99.00th=[ 289], 99.50th=[ 371], 99.90th=[ 701], 99.95th=[ 824], 00:26:21.667 | 99.99th=[ 955] 00:26:21.667 bw ( KiB/s): min= 9240, max= 9240, per=27.89%, avg=9240.00, stdev= 0.00, samples=1 00:26:21.667 iops : min= 2310, max= 2310, avg=2310.00, stdev= 0.00, samples=1 00:26:21.667 lat (usec) : 10=0.07%, 50=0.05%, 250=94.18%, 500=5.43%, 750=0.20% 00:26:21.667 lat (usec) : 1000=0.05% 00:26:21.667 lat (msec) : 4=0.02% 00:26:21.667 cpu : usr=2.20%, sys=11.10%, ctx=4432, majf=0, minf=21 00:26:21.667 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.667 issued rwts: total=2048,2369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:21.667 00:26:21.667 Run status group 0 (all jobs): 00:26:21.667 READ: bw=28.4MiB/s (29.8MB/s), 6138KiB/s-8667KiB/s (6285kB/s-8875kB/s), io=28.5MiB (29.9MB), run=1001-1001msec 00:26:21.667 WRITE: bw=32.4MiB/s (33.9MB/s), 6641KiB/s-9.99MiB/s (6801kB/s-10.5MB/s), io=32.4MiB (34.0MB), run=1001-1001msec 00:26:21.667 00:26:21.667 Disk stats (read/write): 00:26:21.667 nvme0n1: ios=1309/1536, merge=0/0, ticks=439/362, in_queue=801, util=87.88% 00:26:21.667 nvme0n2: ios=2002/2048, merge=0/0, ticks=489/351, in_queue=840, util=88.75% 00:26:21.667 nvme0n3: ios=1227/1536, merge=0/0, ticks=424/378, in_queue=802, util=89.10% 00:26:21.667 nvme0n4: ios=1750/2048, merge=0/0, ticks=404/359, in_queue=763, util=89.63% 00:26:21.667 17:26:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:26:21.667 [global] 00:26:21.667 thread=1 00:26:21.667 invalidate=1 00:26:21.667 rw=write 00:26:21.667 time_based=1 00:26:21.667 runtime=1 00:26:21.667 ioengine=libaio 00:26:21.667 direct=1 00:26:21.667 bs=4096 00:26:21.667 iodepth=128 00:26:21.667 norandommap=0 00:26:21.667 numjobs=1 00:26:21.667 00:26:21.667 verify_dump=1 00:26:21.667 verify_backlog=512 00:26:21.667 verify_state_save=0 00:26:21.667 do_verify=1 00:26:21.667 verify=crc32c-intel 00:26:21.667 [job0] 00:26:21.667 filename=/dev/nvme0n1 00:26:21.667 [job1] 00:26:21.667 filename=/dev/nvme0n2 00:26:21.667 [job2] 00:26:21.667 filename=/dev/nvme0n3 00:26:21.667 [job3] 00:26:21.667 filename=/dev/nvme0n4 00:26:21.667 Could not set queue depth (nvme0n1) 00:26:21.667 Could not set queue depth (nvme0n2) 00:26:21.667 Could not set queue depth (nvme0n3) 00:26:21.667 Could not set queue depth (nvme0n4) 00:26:21.668 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:21.668 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:21.668 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:21.668 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:21.668 fio-3.35 00:26:21.668 Starting 4 threads 00:26:23.043 00:26:23.043 job0: (groupid=0, jobs=1): err= 0: pid=105765: Fri Dec 6 17:26:04 2024 00:26:23.043 read: IOPS=2768, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1003msec) 00:26:23.043 slat (usec): min=6, max=6566, avg=158.14, stdev=783.37 00:26:23.043 clat (usec): min=2346, max=30379, avg=19257.05, stdev=4032.50 00:26:23.043 lat (usec): min=2361, max=32649, avg=19415.19, stdev=4090.02 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[ 2999], 5.00th=[13960], 10.00th=[15008], 20.00th=[15795], 00:26:23.043 | 30.00th=[16909], 40.00th=[19006], 50.00th=[19530], 60.00th=[20055], 00:26:23.043 | 70.00th=[21103], 80.00th=[22414], 90.00th=[23987], 95.00th=[26608], 00:26:23.043 | 99.00th=[27657], 99.50th=[28181], 99.90th=[28181], 99.95th=[28181], 00:26:23.043 | 99.99th=[30278] 00:26:23.043 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone 
resets 00:26:23.043 slat (usec): min=12, max=7584, avg=174.20, stdev=650.64 00:26:23.043 clat (usec): min=11262, max=38719, avg=23747.18, stdev=7359.48 00:26:23.043 lat (usec): min=11329, max=38765, avg=23921.38, stdev=7414.74 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[13042], 5.00th=[14615], 10.00th=[15139], 20.00th=[16581], 00:26:23.043 | 30.00th=[19006], 40.00th=[19530], 50.00th=[21627], 60.00th=[22676], 00:26:23.043 | 70.00th=[30278], 80.00th=[33162], 90.00th=[34341], 95.00th=[35390], 00:26:23.043 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:26:23.043 | 99.99th=[38536] 00:26:23.043 bw ( KiB/s): min=12288, max=12288, per=19.96%, avg=12288.00, stdev= 0.00, samples=2 00:26:23.043 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:26:23.043 lat (msec) : 4=0.62%, 10=0.38%, 20=49.55%, 50=49.46% 00:26:23.043 cpu : usr=2.99%, sys=9.48%, ctx=347, majf=0, minf=11 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:23.043 issued rwts: total=2777,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:23.043 job1: (groupid=0, jobs=1): err= 0: pid=105766: Fri Dec 6 17:26:04 2024 00:26:23.043 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:26:23.043 slat (usec): min=6, max=11594, avg=183.83, stdev=943.87 00:26:23.043 clat (usec): min=13120, max=43336, avg=23762.85, stdev=6605.50 00:26:23.043 lat (usec): min=14138, max=43355, avg=23946.68, stdev=6606.49 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[15270], 5.00th=[16450], 10.00th=[16909], 20.00th=[17695], 00:26:23.043 | 30.00th=[18482], 40.00th=[20317], 50.00th=[22938], 60.00th=[25560], 00:26:23.043 | 70.00th=[26346], 80.00th=[28967], 90.00th=[33817], 95.00th=[36439], 00:26:23.043 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:26:23.043 | 99.99th=[43254] 00:26:23.043 write: IOPS=2680, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1003msec); 0 zone resets 00:26:23.043 slat (usec): min=12, max=7110, avg=187.83, stdev=722.97 00:26:23.043 clat (usec): min=2548, max=42480, avg=24380.11, stdev=8149.95 00:26:23.043 lat (usec): min=7871, max=42528, avg=24567.95, stdev=8176.91 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[ 8586], 5.00th=[15664], 10.00th=[16057], 20.00th=[16909], 00:26:23.043 | 30.00th=[17957], 40.00th=[21890], 50.00th=[22676], 60.00th=[24773], 00:26:23.043 | 70.00th=[25560], 80.00th=[31589], 90.00th=[39584], 95.00th=[41157], 00:26:23.043 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:26:23.043 | 99.99th=[42730] 00:26:23.043 bw ( KiB/s): min= 8208, max=12288, per=16.65%, avg=10248.00, stdev=2885.00, samples=2 00:26:23.043 iops : min= 2052, max= 3072, avg=2562.00, stdev=721.25, samples=2 00:26:23.043 lat (msec) : 4=0.02%, 10=0.61%, 20=35.82%, 50=63.55% 00:26:23.043 cpu : usr=2.59%, sys=8.88%, ctx=279, majf=0, minf=11 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:23.043 issued rwts: total=2560,2689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:23.043 
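job2 and job3 of this pass report the same way below. Stepping back, target/fio.sh drives one fio-wrapper pass per iodepth/workload combination and then a longer backgrounded read pass for the hotplug check (a condensed sketch of the invocations visible at fio.sh@50-53 and fio.sh@58 in this trace; the backgrounding and pid capture are implied by the fio_pid/wait lines further down):

  FIO=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper
  $FIO -p nvmf -i 4096 -d 1   -t write     -r 1 -v
  $FIO -p nvmf -i 4096 -d 1   -t randwrite -r 1 -v
  $FIO -p nvmf -i 4096 -d 128 -t write     -r 1 -v
  $FIO -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
  $FIO -p nvmf -i 4096 -d 1   -t read -r 10 &   # hotplug pass, runs while bdevs are deleted
  fio_pid=$!

While that read pass runs, the script deletes the raid0/concat0 arrays and the Malloc bdevs out from under the exported namespaces, so the 'io_u error ... Operation not supported' lines and the closing 'nvmf hotplug test: fio failed as expected' later in this log are the intended outcome, not a regression.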
job2: (groupid=0, jobs=1): err= 0: pid=105767: Fri Dec 6 17:26:04 2024 00:26:23.043 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:26:23.043 slat (usec): min=5, max=6057, avg=103.07, stdev=553.48 00:26:23.043 clat (usec): min=9166, max=20511, avg=13565.57, stdev=1906.86 00:26:23.043 lat (usec): min=9187, max=20525, avg=13668.64, stdev=1929.23 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11076], 20.00th=[11994], 00:26:23.043 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13304], 60.00th=[13829], 00:26:23.043 | 70.00th=[14484], 80.00th=[15139], 90.00th=[16188], 95.00th=[16909], 00:26:23.043 | 99.00th=[18482], 99.50th=[19006], 99.90th=[19006], 99.95th=[20579], 00:26:23.043 | 99.99th=[20579] 00:26:23.043 write: IOPS=4911, BW=19.2MiB/s (20.1MB/s)(19.2MiB/1001msec); 0 zone resets 00:26:23.043 slat (usec): min=9, max=6213, avg=98.75, stdev=531.26 00:26:23.043 clat (usec): min=459, max=20833, avg=13014.91, stdev=1689.25 00:26:23.043 lat (usec): min=5122, max=20885, avg=13113.67, stdev=1756.58 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[ 6325], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:26:23.043 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:26:23.043 | 70.00th=[13698], 80.00th=[14222], 90.00th=[15008], 95.00th=[15270], 00:26:23.043 | 99.00th=[17957], 99.50th=[18220], 99.90th=[20317], 99.95th=[20579], 00:26:23.043 | 99.99th=[20841] 00:26:23.043 bw ( KiB/s): min=19256, max=19256, per=31.28%, avg=19256.00, stdev= 0.00, samples=1 00:26:23.043 iops : min= 4814, max= 4814, avg=4814.00, stdev= 0.00, samples=1 00:26:23.043 lat (usec) : 500=0.01% 00:26:23.043 lat (msec) : 10=2.60%, 20=97.26%, 50=0.13% 00:26:23.043 cpu : usr=3.90%, sys=14.70%, ctx=382, majf=0, minf=16 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:23.043 issued rwts: total=4608,4916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:23.043 job3: (groupid=0, jobs=1): err= 0: pid=105768: Fri Dec 6 17:26:04 2024 00:26:23.043 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:26:23.043 slat (usec): min=6, max=3427, avg=104.66, stdev=481.55 00:26:23.043 clat (usec): min=10241, max=17145, avg=13884.91, stdev=902.18 00:26:23.043 lat (usec): min=10808, max=17953, avg=13989.57, stdev=821.78 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[11076], 5.00th=[12256], 10.00th=[12911], 20.00th=[13435], 00:26:23.043 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13960], 60.00th=[14091], 00:26:23.043 | 70.00th=[14353], 80.00th=[14615], 90.00th=[14877], 95.00th=[15139], 00:26:23.043 | 99.00th=[16188], 99.50th=[16319], 99.90th=[16909], 99.95th=[17171], 00:26:23.043 | 99.99th=[17171] 00:26:23.043 write: IOPS=4752, BW=18.6MiB/s (19.5MB/s)(18.6MiB/1001msec); 0 zone resets 00:26:23.043 slat (usec): min=9, max=4380, avg=100.57, stdev=441.38 00:26:23.043 clat (usec): min=490, max=16437, avg=13110.17, stdev=1605.57 00:26:23.043 lat (usec): min=506, max=16458, avg=13210.74, stdev=1591.85 00:26:23.043 clat percentiles (usec): 00:26:23.043 | 1.00th=[ 7963], 5.00th=[10945], 10.00th=[11338], 20.00th=[11731], 00:26:23.043 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13435], 60.00th=[13829], 00:26:23.043 | 70.00th=[14091], 80.00th=[14484], 90.00th=[15008], 
95.00th=[15270], 00:26:23.043 | 99.00th=[15926], 99.50th=[16057], 99.90th=[16450], 99.95th=[16450], 00:26:23.043 | 99.99th=[16450] 00:26:23.043 bw ( KiB/s): min=17152, max=19927, per=30.12%, avg=18539.50, stdev=1962.22, samples=2 00:26:23.043 iops : min= 4288, max= 4981, avg=4634.50, stdev=490.02, samples=2 00:26:23.043 lat (usec) : 500=0.01%, 750=0.03% 00:26:23.043 lat (msec) : 10=0.63%, 20=99.33% 00:26:23.043 cpu : usr=4.00%, sys=14.20%, ctx=472, majf=0, minf=15 00:26:23.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:26:23.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:23.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:23.043 issued rwts: total=4608,4757,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:23.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:23.043 00:26:23.043 Run status group 0 (all jobs): 00:26:23.043 READ: bw=56.7MiB/s (59.4MB/s), 9.97MiB/s-18.0MiB/s (10.5MB/s-18.9MB/s), io=56.8MiB (59.6MB), run=1001-1003msec 00:26:23.043 WRITE: bw=60.1MiB/s (63.0MB/s), 10.5MiB/s-19.2MiB/s (11.0MB/s-20.1MB/s), io=60.3MiB (63.2MB), run=1001-1003msec 00:26:23.043 00:26:23.043 Disk stats (read/write): 00:26:23.043 nvme0n1: ios=2456/2560, merge=0/0, ticks=15049/19593, in_queue=34642, util=89.08% 00:26:23.043 nvme0n2: ios=2193/2560, merge=0/0, ticks=11076/14895, in_queue=25971, util=89.51% 00:26:23.043 nvme0n3: ios=4086/4096, merge=0/0, ticks=26112/23660, in_queue=49772, util=90.35% 00:26:23.043 nvme0n4: ios=3993/4096, merge=0/0, ticks=12660/11840, in_queue=24500, util=89.68% 00:26:23.043 17:26:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:26:23.043 [global] 00:26:23.043 thread=1 00:26:23.043 invalidate=1 00:26:23.043 rw=randwrite 00:26:23.043 time_based=1 00:26:23.043 runtime=1 00:26:23.043 ioengine=libaio 00:26:23.043 direct=1 00:26:23.043 bs=4096 00:26:23.043 iodepth=128 00:26:23.043 norandommap=0 00:26:23.043 numjobs=1 00:26:23.043 00:26:23.043 verify_dump=1 00:26:23.043 verify_backlog=512 00:26:23.043 verify_state_save=0 00:26:23.043 do_verify=1 00:26:23.043 verify=crc32c-intel 00:26:23.043 [job0] 00:26:23.043 filename=/dev/nvme0n1 00:26:23.043 [job1] 00:26:23.043 filename=/dev/nvme0n2 00:26:23.043 [job2] 00:26:23.043 filename=/dev/nvme0n3 00:26:23.043 [job3] 00:26:23.044 filename=/dev/nvme0n4 00:26:23.044 Could not set queue depth (nvme0n1) 00:26:23.044 Could not set queue depth (nvme0n2) 00:26:23.044 Could not set queue depth (nvme0n3) 00:26:23.044 Could not set queue depth (nvme0n4) 00:26:23.044 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:23.044 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:23.044 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:23.044 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:26:23.044 fio-3.35 00:26:23.044 Starting 4 threads 00:26:24.423 00:26:24.423 job0: (groupid=0, jobs=1): err= 0: pid=105827: Fri Dec 6 17:26:06 2024 00:26:24.423 read: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(20.5MiB/1004msec) 00:26:24.423 slat (usec): min=6, max=3791, avg=90.76, stdev=427.54 00:26:24.424 clat (usec): min=637, max=16490, avg=11699.64, stdev=1426.85 00:26:24.424 lat 
(usec): min=4025, max=16517, avg=11790.40, stdev=1451.00 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:26:24.424 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:26:24.424 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13173], 95.00th=[13829], 00:26:24.424 | 99.00th=[15270], 99.50th=[15533], 99.90th=[16188], 99.95th=[16450], 00:26:24.424 | 99.99th=[16450] 00:26:24.424 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:26:24.424 slat (usec): min=10, max=3702, avg=85.25, stdev=321.35 00:26:24.424 clat (usec): min=7665, max=16729, avg=11601.15, stdev=1383.44 00:26:24.424 lat (usec): min=7692, max=16778, avg=11686.40, stdev=1384.89 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[ 8291], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10814], 00:26:24.424 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:26:24.424 | 70.00th=[12256], 80.00th=[12780], 90.00th=[13173], 95.00th=[13698], 00:26:24.424 | 99.00th=[15401], 99.50th=[15664], 99.90th=[16450], 99.95th=[16581], 00:26:24.424 | 99.99th=[16712] 00:26:24.424 bw ( KiB/s): min=20798, max=24216, per=34.60%, avg=22507.00, stdev=2416.89, samples=2 00:26:24.424 iops : min= 5199, max= 6054, avg=5626.50, stdev=604.58, samples=2 00:26:24.424 lat (usec) : 750=0.01% 00:26:24.424 lat (msec) : 10=9.47%, 20=90.52% 00:26:24.424 cpu : usr=5.08%, sys=15.25%, ctx=657, majf=0, minf=5 00:26:24.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:26:24.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:24.424 issued rwts: total=5259,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:24.424 job1: (groupid=0, jobs=1): err= 0: pid=105828: Fri Dec 6 17:26:06 2024 00:26:24.424 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:26:24.424 slat (usec): min=5, max=8711, avg=160.55, stdev=837.10 00:26:24.424 clat (usec): min=9646, max=36736, avg=20924.81, stdev=7972.72 00:26:24.424 lat (usec): min=9665, max=36767, avg=21085.36, stdev=8013.45 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[10028], 5.00th=[11469], 10.00th=[11863], 20.00th=[12518], 00:26:24.424 | 30.00th=[13698], 40.00th=[14615], 50.00th=[20579], 60.00th=[26608], 00:26:24.424 | 70.00th=[27395], 80.00th=[29492], 90.00th=[30802], 95.00th=[31851], 00:26:24.424 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:26:24.424 | 99.99th=[36963] 00:26:24.424 write: IOPS=3278, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1005msec); 0 zone resets 00:26:24.424 slat (usec): min=4, max=6778, avg=145.53, stdev=555.58 00:26:24.424 clat (usec): min=4684, max=30828, avg=19027.59, stdev=5992.84 00:26:24.424 lat (usec): min=4741, max=30848, avg=19173.12, stdev=6035.73 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[ 8717], 5.00th=[12256], 10.00th=[12649], 20.00th=[13304], 00:26:24.424 | 30.00th=[13829], 40.00th=[14484], 50.00th=[19268], 60.00th=[22414], 00:26:24.424 | 70.00th=[23200], 80.00th=[25297], 90.00th=[27395], 95.00th=[28181], 00:26:24.424 | 99.00th=[29492], 99.50th=[30278], 99.90th=[30802], 99.95th=[30802], 00:26:24.424 | 99.99th=[30802] 00:26:24.424 bw ( KiB/s): min= 8960, max=16318, per=19.43%, avg=12639.00, stdev=5202.89, samples=2 00:26:24.424 iops : min= 2240, max= 4079, avg=3159.50, stdev=1300.37, samples=2 
00:26:24.424 lat (msec) : 10=1.01%, 20=49.63%, 50=49.36% 00:26:24.424 cpu : usr=3.19%, sys=9.46%, ctx=655, majf=0, minf=5 00:26:24.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:24.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:24.424 issued rwts: total=3072,3295,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:24.424 job2: (groupid=0, jobs=1): err= 0: pid=105829: Fri Dec 6 17:26:06 2024 00:26:24.424 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:26:24.424 slat (usec): min=4, max=7386, avg=119.62, stdev=621.34 00:26:24.424 clat (usec): min=9676, max=23763, avg=15133.27, stdev=2333.08 00:26:24.424 lat (usec): min=9701, max=23789, avg=15252.90, stdev=2358.84 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[10159], 5.00th=[11600], 10.00th=[12387], 20.00th=[13173], 00:26:24.424 | 30.00th=[13829], 40.00th=[14484], 50.00th=[14877], 60.00th=[15401], 00:26:24.424 | 70.00th=[16319], 80.00th=[16909], 90.00th=[18220], 95.00th=[19268], 00:26:24.424 | 99.00th=[21365], 99.50th=[22152], 99.90th=[23462], 99.95th=[23462], 00:26:24.424 | 99.99th=[23725] 00:26:24.424 write: IOPS=4332, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1003msec); 0 zone resets 00:26:24.424 slat (usec): min=11, max=6713, avg=109.12, stdev=460.64 00:26:24.424 clat (usec): min=600, max=23825, avg=14898.97, stdev=2222.49 00:26:24.424 lat (usec): min=6463, max=24277, avg=15008.10, stdev=2253.91 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[ 7373], 5.00th=[11731], 10.00th=[12911], 20.00th=[13435], 00:26:24.424 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14877], 60.00th=[15533], 00:26:24.424 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16712], 95.00th=[19006], 00:26:24.424 | 99.00th=[22152], 99.50th=[22938], 99.90th=[23725], 99.95th=[23725], 00:26:24.424 | 99.99th=[23725] 00:26:24.424 bw ( KiB/s): min=16534, max=17176, per=25.91%, avg=16855.00, stdev=453.96, samples=2 00:26:24.424 iops : min= 4133, max= 4294, avg=4213.50, stdev=113.84, samples=2 00:26:24.424 lat (usec) : 750=0.01% 00:26:24.424 lat (msec) : 10=1.52%, 20=95.63%, 50=2.84% 00:26:24.424 cpu : usr=3.69%, sys=13.47%, ctx=527, majf=0, minf=6 00:26:24.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:26:24.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:24.424 issued rwts: total=4096,4345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:24.424 job3: (groupid=0, jobs=1): err= 0: pid=105830: Fri Dec 6 17:26:06 2024 00:26:24.424 read: IOPS=2707, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1005msec) 00:26:24.424 slat (usec): min=3, max=9078, avg=187.57, stdev=807.31 00:26:24.424 clat (usec): min=2906, max=39173, avg=23607.16, stdev=6971.25 00:26:24.424 lat (usec): min=7079, max=39201, avg=23794.73, stdev=6998.03 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[ 7635], 5.00th=[14615], 10.00th=[15795], 20.00th=[16188], 00:26:24.424 | 30.00th=[16581], 40.00th=[19006], 50.00th=[25822], 60.00th=[27657], 00:26:24.424 | 70.00th=[28967], 80.00th=[29754], 90.00th=[31065], 95.00th=[33424], 00:26:24.424 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:26:24.424 | 99.99th=[39060] 00:26:24.424 write: IOPS=3056, 
BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:26:24.424 slat (usec): min=11, max=6267, avg=152.05, stdev=634.71 00:26:24.424 clat (usec): min=12021, max=36048, avg=20360.48, stdev=4944.91 00:26:24.424 lat (usec): min=12634, max=36065, avg=20512.53, stdev=4971.56 00:26:24.424 clat percentiles (usec): 00:26:24.424 | 1.00th=[12780], 5.00th=[15270], 10.00th=[15401], 20.00th=[15533], 00:26:24.424 | 30.00th=[15664], 40.00th=[16057], 50.00th=[20317], 60.00th=[22414], 00:26:24.424 | 70.00th=[23725], 80.00th=[25297], 90.00th=[26870], 95.00th=[28443], 00:26:24.424 | 99.00th=[32375], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:26:24.424 | 99.99th=[35914] 00:26:24.424 bw ( KiB/s): min=10216, max=14331, per=18.87%, avg=12273.50, stdev=2909.74, samples=2 00:26:24.424 iops : min= 2554, max= 3582, avg=3068.00, stdev=726.91, samples=2 00:26:24.424 lat (msec) : 4=0.02%, 10=0.55%, 20=44.26%, 50=55.17% 00:26:24.424 cpu : usr=2.89%, sys=8.57%, ctx=563, majf=0, minf=3 00:26:24.424 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:24.424 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:24.424 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:24.424 issued rwts: total=2721,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:24.424 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:24.424 00:26:24.424 Run status group 0 (all jobs): 00:26:24.424 READ: bw=58.9MiB/s (61.7MB/s), 10.6MiB/s-20.5MiB/s (11.1MB/s-21.5MB/s), io=59.2MiB (62.0MB), run=1003-1005msec 00:26:24.424 WRITE: bw=63.5MiB/s (66.6MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=63.8MiB (66.9MB), run=1003-1005msec 00:26:24.424 00:26:24.424 Disk stats (read/write): 00:26:24.424 nvme0n1: ios=4515/4608, merge=0/0, ticks=16323/15970, in_queue=32293, util=86.36% 00:26:24.424 nvme0n2: ios=2606/3060, merge=0/0, ticks=13241/14596, in_queue=27837, util=87.61% 00:26:24.424 nvme0n3: ios=3535/3584, merge=0/0, ticks=25558/23823, in_queue=49381, util=88.68% 00:26:24.424 nvme0n4: ios=2446/2560, merge=0/0, ticks=13896/10933, in_queue=24829, util=89.42% 00:26:24.424 17:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:26:24.424 17:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105843 00:26:24.424 17:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:26:24.424 17:26:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:26:24.424 [global] 00:26:24.424 thread=1 00:26:24.424 invalidate=1 00:26:24.424 rw=read 00:26:24.424 time_based=1 00:26:24.424 runtime=10 00:26:24.424 ioengine=libaio 00:26:24.424 direct=1 00:26:24.424 bs=4096 00:26:24.424 iodepth=1 00:26:24.424 norandommap=1 00:26:24.424 numjobs=1 00:26:24.424 00:26:24.424 [job0] 00:26:24.424 filename=/dev/nvme0n1 00:26:24.424 [job1] 00:26:24.424 filename=/dev/nvme0n2 00:26:24.424 [job2] 00:26:24.424 filename=/dev/nvme0n3 00:26:24.424 [job3] 00:26:24.424 filename=/dev/nvme0n4 00:26:24.424 Could not set queue depth (nvme0n1) 00:26:24.424 Could not set queue depth (nvme0n2) 00:26:24.424 Could not set queue depth (nvme0n3) 00:26:24.424 Could not set queue depth (nvme0n4) 00:26:24.424 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:24.425 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:24.425 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:24.425 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:24.425 fio-3.35 00:26:24.425 Starting 4 threads 00:26:27.714 17:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:26:27.714 fio: pid=105886, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:27.714 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=48480256, buflen=4096 00:26:27.714 17:26:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:26:27.971 fio: pid=105885, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:27.971 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=56721408, buflen=4096 00:26:27.971 17:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:27.971 17:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:26:28.228 fio: pid=105883, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:28.228 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=47771648, buflen=4096 00:26:28.486 17:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:28.486 17:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:26:28.744 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=54689792, buflen=4096 00:26:28.744 fio: pid=105884, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:26:28.744 17:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:28.744 17:26:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:26:28.744 00:26:28.744 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105883: Fri Dec 6 17:26:10 2024 00:26:28.744 read: IOPS=3083, BW=12.0MiB/s (12.6MB/s)(45.6MiB/3783msec) 00:26:28.744 slat (usec): min=4, max=13857, avg=22.73, stdev=209.92 00:26:28.744 clat (usec): min=163, max=2523, avg=299.65, stdev=65.63 00:26:28.744 lat (usec): min=179, max=14090, avg=322.39, stdev=218.67 00:26:28.744 clat percentiles (usec): 00:26:28.744 | 1.00th=[ 180], 5.00th=[ 196], 10.00th=[ 219], 20.00th=[ 273], 00:26:28.744 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:26:28.744 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 371], 95.00th=[ 408], 00:26:28.744 | 99.00th=[ 506], 99.50th=[ 529], 99.90th=[ 693], 99.95th=[ 889], 00:26:28.744 | 99.99th=[ 1729] 00:26:28.744 bw ( KiB/s): min=10671, max=13632, per=25.25%, avg=12413.57, stdev=891.86, samples=7 00:26:28.744 iops : min= 2667, max= 
3408, avg=3103.29, stdev=223.21, samples=7 00:26:28.744 lat (usec) : 250=12.95%, 500=85.88%, 750=1.10%, 1000=0.04% 00:26:28.744 lat (msec) : 2=0.02%, 4=0.01% 00:26:28.744 cpu : usr=1.14%, sys=4.79%, ctx=11902, majf=0, minf=1 00:26:28.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 issued rwts: total=11664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.744 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105884: Fri Dec 6 17:26:10 2024 00:26:28.744 read: IOPS=3236, BW=12.6MiB/s (13.3MB/s)(52.2MiB/4126msec) 00:26:28.744 slat (usec): min=6, max=11755, avg=22.07, stdev=193.43 00:26:28.744 clat (usec): min=157, max=2561, avg=285.21, stdev=74.91 00:26:28.744 lat (usec): min=177, max=11979, avg=307.28, stdev=208.08 00:26:28.744 clat percentiles (usec): 00:26:28.744 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 188], 20.00th=[ 221], 00:26:28.744 | 30.00th=[ 269], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 297], 00:26:28.744 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 363], 95.00th=[ 400], 00:26:28.744 | 99.00th=[ 494], 99.50th=[ 523], 99.90th=[ 685], 99.95th=[ 824], 00:26:28.744 | 99.99th=[ 2540] 00:26:28.744 bw ( KiB/s): min=10769, max=15848, per=26.49%, avg=13020.12, stdev=1614.91, samples=8 00:26:28.744 iops : min= 2692, max= 3962, avg=3255.00, stdev=403.78, samples=8 00:26:28.744 lat (usec) : 250=23.35%, 500=75.80%, 750=0.78%, 1000=0.02% 00:26:28.744 lat (msec) : 2=0.03%, 4=0.01% 00:26:28.744 cpu : usr=1.09%, sys=4.80%, ctx=13537, majf=0, minf=2 00:26:28.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 issued rwts: total=13353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.744 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105885: Fri Dec 6 17:26:10 2024 00:26:28.744 read: IOPS=4054, BW=15.8MiB/s (16.6MB/s)(54.1MiB/3416msec) 00:26:28.744 slat (usec): min=5, max=11243, avg=22.20, stdev=116.23 00:26:28.744 clat (usec): min=172, max=3025, avg=222.82, stdev=61.33 00:26:28.744 lat (usec): min=189, max=11535, avg=245.02, stdev=133.19 00:26:28.744 clat percentiles (usec): 00:26:28.744 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:26:28.744 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:26:28.744 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 314], 00:26:28.744 | 99.00th=[ 474], 99.50th=[ 506], 99.90th=[ 660], 99.95th=[ 857], 00:26:28.744 | 99.99th=[ 2704] 00:26:28.744 bw ( KiB/s): min=16128, max=17568, per=34.42%, avg=16917.33, stdev=620.21, samples=6 00:26:28.744 iops : min= 4032, max= 4392, avg=4229.33, stdev=155.05, samples=6 00:26:28.744 lat (usec) : 250=90.69%, 500=8.72%, 750=0.51%, 1000=0.03% 00:26:28.744 lat (msec) : 2=0.01%, 4=0.02% 00:26:28.744 cpu : usr=1.32%, sys=6.85%, ctx=14036, majf=0, minf=1 00:26:28.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:28.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 issued rwts: total=13849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.744 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105886: Fri Dec 6 17:26:10 2024 00:26:28.744 read: IOPS=3887, BW=15.2MiB/s (15.9MB/s)(46.2MiB/3045msec) 00:26:28.744 slat (nsec): min=14354, max=94702, avg=20157.48, stdev=6095.02 00:26:28.744 clat (usec): min=180, max=2183, avg=235.35, stdev=41.28 00:26:28.744 lat (usec): min=210, max=2204, avg=255.51, stdev=42.77 00:26:28.744 clat percentiles (usec): 00:26:28.744 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:26:28.744 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 231], 00:26:28.744 | 70.00th=[ 237], 80.00th=[ 245], 90.00th=[ 265], 95.00th=[ 310], 00:26:28.744 | 99.00th=[ 359], 99.50th=[ 371], 99.90th=[ 594], 99.95th=[ 783], 00:26:28.744 | 99.99th=[ 1500] 00:26:28.744 bw ( KiB/s): min=14488, max=16200, per=31.69%, avg=15578.67, stdev=672.45, samples=6 00:26:28.744 iops : min= 3622, max= 4050, avg=3894.67, stdev=168.11, samples=6 00:26:28.744 lat (usec) : 250=84.52%, 500=15.33%, 750=0.08%, 1000=0.04% 00:26:28.744 lat (msec) : 2=0.01%, 4=0.01% 00:26:28.744 cpu : usr=1.31%, sys=6.31%, ctx=11839, majf=0, minf=2 00:26:28.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.744 issued rwts: total=11837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.744 00:26:28.744 Run status group 0 (all jobs): 00:26:28.744 READ: bw=48.0MiB/s (50.3MB/s), 12.0MiB/s-15.8MiB/s (12.6MB/s-16.6MB/s), io=198MiB (208MB), run=3045-4126msec 00:26:28.744 00:26:28.744 Disk stats (read/write): 00:26:28.744 nvme0n1: ios=11158/0, merge=0/0, ticks=3368/0, in_queue=3368, util=95.46% 00:26:28.744 nvme0n2: ios=13319/0, merge=0/0, ticks=3839/0, in_queue=3839, util=95.96% 00:26:28.744 nvme0n3: ios=13727/0, merge=0/0, ticks=3109/0, in_queue=3109, util=96.51% 00:26:28.744 nvme0n4: ios=11144/0, merge=0/0, ticks=2682/0, in_queue=2682, util=96.76% 00:26:29.002 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:29.002 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:26:29.259 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:29.259 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:26:29.824 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:29.824 17:26:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:26:30.082 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:26:30.082 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105843 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:30.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.339 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:30.340 nvmf hotplug test: fio failed as expected 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:26:30.340 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.905 17:26:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.905 rmmod nvme_tcp 00:26:30.905 rmmod nvme_fabrics 00:26:30.905 rmmod nvme_keyring 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105366 ']' 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105366 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105366 ']' 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105366 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105366 00:26:30.905 killing process with pid 105366 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105366' 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105366 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105366 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:30.905 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set 
nvmf_init_br nomaster 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:26:31.163 ************************************ 00:26:31.163 END TEST nvmf_fio_target 00:26:31.163 ************************************ 00:26:31.163 00:26:31.163 real 0m20.644s 00:26:31.163 user 1m2.452s 00:26:31.163 sys 0m12.639s 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:31.163 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:31.422 ************************************ 00:26:31.422 START TEST nvmf_bdevio 00:26:31.422 
************************************ 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:26:31.422 * Looking for test storage... 00:26:31.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:31.422 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.423 --rc genhtml_branch_coverage=1 00:26:31.423 --rc genhtml_function_coverage=1 00:26:31.423 --rc genhtml_legend=1 00:26:31.423 --rc geninfo_all_blocks=1 00:26:31.423 --rc geninfo_unexecuted_blocks=1 00:26:31.423 00:26:31.423 ' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.423 --rc genhtml_branch_coverage=1 00:26:31.423 --rc genhtml_function_coverage=1 00:26:31.423 --rc genhtml_legend=1 00:26:31.423 --rc geninfo_all_blocks=1 00:26:31.423 --rc geninfo_unexecuted_blocks=1 00:26:31.423 00:26:31.423 ' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.423 --rc genhtml_branch_coverage=1 00:26:31.423 --rc genhtml_function_coverage=1 00:26:31.423 --rc genhtml_legend=1 00:26:31.423 --rc geninfo_all_blocks=1 00:26:31.423 --rc geninfo_unexecuted_blocks=1 00:26:31.423 00:26:31.423 ' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:31.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:31.423 --rc genhtml_branch_coverage=1 00:26:31.423 --rc genhtml_function_coverage=1 00:26:31.423 --rc genhtml_legend=1 00:26:31.423 --rc geninfo_all_blocks=1 00:26:31.423 --rc geninfo_unexecuted_blocks=1 00:26:31.423 00:26:31.423 ' 00:26:31.423 17:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.423 17:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:31.423 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:31.685 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:31.686 Cannot find device "nvmf_init_br" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:31.686 Cannot find device "nvmf_init_br2" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:31.686 Cannot find device "nvmf_tgt_br" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.686 Cannot find device "nvmf_tgt_br2" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:31.686 Cannot find device "nvmf_init_br" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:31.686 Cannot find device "nvmf_init_br2" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:31.686 Cannot find device "nvmf_tgt_br" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:31.686 Cannot find device "nvmf_tgt_br2" 00:26:31.686 17:26:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:31.686 Cannot find device "nvmf_br" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:31.686 Cannot find device "nvmf_init_if" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:31.686 Cannot find device "nvmf_init_if2" 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.686 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:31.945 17:26:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:31.945 17:26:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:31.945 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:31.945 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:26:31.945 00:26:31.945 --- 10.0.0.3 ping statistics --- 00:26:31.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.945 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:31.945 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:31.945 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:26:31.945 00:26:31.945 --- 10.0.0.4 ping statistics --- 00:26:31.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.945 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:31.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:31.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:26:31.945 00:26:31.945 --- 10.0.0.1 ping statistics --- 00:26:31.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.945 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:31.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:26:31.945 00:26:31.945 --- 10.0.0.2 ping statistics --- 00:26:31.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.945 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=106272 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 106272 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 106272 ']' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.945 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.946 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.205 [2024-12-06 17:26:14.272090] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:32.205 [2024-12-06 17:26:14.273424] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:26:32.205 [2024-12-06 17:26:14.273504] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.205 [2024-12-06 17:26:14.435049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.205 [2024-12-06 17:26:14.471101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.205 [2024-12-06 17:26:14.471178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.205 [2024-12-06 17:26:14.471191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.206 [2024-12-06 17:26:14.471199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.206 [2024-12-06 17:26:14.471207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.206 [2024-12-06 17:26:14.472405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:32.206 [2024-12-06 17:26:14.472559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:32.206 [2024-12-06 17:26:14.472629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:32.206 [2024-12-06 17:26:14.472632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.464 [2024-12-06 17:26:14.526346] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:32.464 [2024-12-06 17:26:14.526755] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:32.464 [2024-12-06 17:26:14.527447] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
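Everything from nvmf/common.sh@145 onward is nvmf_veth_init building the private test network, and nvmfappstart then launches the target inside it in interrupt mode. Condensed to a single initiator/target pair, the sequence the log records is roughly the following; this is a sketch of what common.sh does (the second if2/br2 pair and the SPDK_NVMF iptables rules are omitted), not a drop-in replacement:

  # Sketch: one veth pair per side, bridged together, as in nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ping -c 1 10.0.0.3   # initiator -> target, as verified by the pings above

  # Sketch: the interrupt-mode target launch recorded at nvmf/common.sh@508.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &

The namespace is what lets the target listen on 10.0.0.3:4420 while the initiator side (kernel nvme-tcp and the bdevio process) stays in the root namespace and reaches it over the bridge.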
00:26:32.464 [2024-12-06 17:26:14.527510] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:32.464 [2024-12-06 17:26:14.527511] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.465 [2024-12-06 17:26:14.629878] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.465 Malloc0 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:32.465 [2024-12-06 17:26:14.710014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:32.465 { 00:26:32.465 "params": { 00:26:32.465 "name": "Nvme$subsystem", 00:26:32.465 "trtype": "$TEST_TRANSPORT", 00:26:32.465 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.465 "adrfam": "ipv4", 00:26:32.465 "trsvcid": "$NVMF_PORT", 00:26:32.465 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.465 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.465 "hdgst": ${hdgst:-false}, 00:26:32.465 "ddgst": ${ddgst:-false} 00:26:32.465 }, 00:26:32.465 "method": "bdev_nvme_attach_controller" 00:26:32.465 } 00:26:32.465 EOF 00:26:32.465 )") 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:26:32.465 17:26:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:32.465 "params": { 00:26:32.465 "name": "Nvme1", 00:26:32.465 "trtype": "tcp", 00:26:32.465 "traddr": "10.0.0.3", 00:26:32.465 "adrfam": "ipv4", 00:26:32.465 "trsvcid": "4420", 00:26:32.465 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:32.465 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:32.465 "hdgst": false, 00:26:32.465 "ddgst": false 00:26:32.465 }, 00:26:32.465 "method": "bdev_nvme_attach_controller" 00:26:32.465 }' 00:26:32.723 [2024-12-06 17:26:14.763664] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
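The gen_nvmf_target_json output printed just above is handed to bdevio as --json /dev/fd/62, i.e. over an anonymous pipe rather than a file on disk. The same pattern can be reproduced by hand with process substitution, roughly as below; the "subsystems" envelope is the standard wrapper SPDK's --json loader expects and is assumed here, since the log only prints the inner attach-controller entry:

  # Sketch: attach the NVMe-oF controller via an inline JSON config.
  config='{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.3","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json <(echo "$config")

With that config loaded, bdevio sees the remote namespace as a local bdev named Nvme1n1, which is exactly the device the CUnit suite below runs against.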
00:26:32.724 [2024-12-06 17:26:14.763760] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106307 ] 00:26:32.724 [2024-12-06 17:26:14.912965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:32.724 [2024-12-06 17:26:14.961474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.724 [2024-12-06 17:26:14.961636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.724 [2024-12-06 17:26:14.961662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.982 I/O targets: 00:26:32.982 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:26:32.982 00:26:32.982 00:26:32.982 CUnit - A unit testing framework for C - Version 2.1-3 00:26:32.982 http://cunit.sourceforge.net/ 00:26:32.982 00:26:32.982 00:26:32.982 Suite: bdevio tests on: Nvme1n1 00:26:32.982 Test: blockdev write read block ...passed 00:26:32.982 Test: blockdev write zeroes read block ...passed 00:26:32.982 Test: blockdev write zeroes read no split ...passed 00:26:32.982 Test: blockdev write zeroes read split ...passed 00:26:32.982 Test: blockdev write zeroes read split partial ...passed 00:26:32.982 Test: blockdev reset ...[2024-12-06 17:26:15.257700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:32.982 [2024-12-06 17:26:15.258092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4df50 (9): Bad file descriptor 00:26:32.982 [2024-12-06 17:26:15.261933] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:26:32.982 passed 00:26:32.982 Test: blockdev write read 8 blocks ...passed 00:26:32.982 Test: blockdev write read size > 128k ...passed 00:26:32.982 Test: blockdev write read invalid size ...passed 00:26:33.243 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:26:33.243 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:26:33.243 Test: blockdev write read max offset ...passed 00:26:33.243 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:26:33.243 Test: blockdev writev readv 8 blocks ...passed 00:26:33.243 Test: blockdev writev readv 30 x 1block ...passed 00:26:33.243 Test: blockdev writev readv block ...passed 00:26:33.243 Test: blockdev writev readv size > 128k ...passed 00:26:33.243 Test: blockdev writev readv size > 128k in two iovs ...passed 00:26:33.243 Test: blockdev comparev and writev ...[2024-12-06 17:26:15.434991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.435056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.435091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.435105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.435690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.435724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.435746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.435765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.436198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.436231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.436254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.436282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.436788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.436824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.436847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:26:33.243 [2024-12-06 17:26:15.436865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:33.243 passed 00:26:33.243 Test: blockdev nvme passthru rw ...passed 00:26:33.243 Test: blockdev nvme passthru vendor specific ...[2024-12-06 17:26:15.519579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:33.243 [2024-12-06 17:26:15.519625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.519767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:33.243 [2024-12-06 17:26:15.519784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.519912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:33.243 [2024-12-06 17:26:15.519942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:33.243 [2024-12-06 17:26:15.520073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:33.243 [2024-12-06 17:26:15.520098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:33.243 passed 00:26:33.243 Test: blockdev nvme admin passthru ...passed 00:26:33.501 Test: blockdev copy ...passed 00:26:33.501 00:26:33.501 Run Summary: Type Total Ran Passed Failed Inactive 00:26:33.501 suites 1 1 n/a 0 0 00:26:33.501 tests 23 23 23 0 0 00:26:33.501 asserts 152 152 152 0 n/a 00:26:33.501 00:26:33.501 Elapsed time = 0.953 seconds 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:26:33.501 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.502 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:26:33.502 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.502 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.502 rmmod nvme_tcp 00:26:33.502 rmmod nvme_fabrics 00:26:33.759 rmmod nvme_keyring 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
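With all 23 tests passed, nvmftestfini now unwinds the init path in reverse: kernel NVMe modules first (the bare "rmmod nvme_tcp/nvme_fabrics/nvme_keyring" lines are modprobe -v output), then the nvmf_tgt process by PID, then the SPDK-tagged iptables rules and the veth/bridge topology. Condensed, and with the PID and interface names as in this run, the order is roughly:

  # Sketch of the teardown order nvmftestfini performs here.
  sync
  modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics/nvme_keyring deps, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 106272                   # nvmfpid: the interrupt-mode nvmf_tgt started earlier
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: drop only the SPDK-tagged rules
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns delete nvmf_tgt_ns_spdk   # what remove_spdk_ns amounts to (sketch)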
00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 106272 ']' 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 106272 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 106272 ']' 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 106272 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106272 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:26:33.759 killing process with pid 106272 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106272' 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 106272 00:26:33.759 17:26:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 106272 00:26:33.759 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.759 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.759 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.759 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:26:33.759 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:26:33.759 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.760 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.760 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.760 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:33.760 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:33.760 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:34.018 17:26:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:26:34.018 00:26:34.018 real 0m2.780s 00:26:34.018 user 0m6.696s 00:26:34.018 sys 0m1.155s 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:26:34.018 ************************************ 00:26:34.018 END TEST nvmf_bdevio 00:26:34.018 ************************************ 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:34.018 00:26:34.018 real 3m32.470s 00:26:34.018 user 9m47.729s 00:26:34.018 sys 1m19.853s 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.018 17:26:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:34.018 ************************************ 00:26:34.018 END TEST nvmf_target_core_interrupt_mode 00:26:34.018 ************************************ 00:26:34.277 17:26:16 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:34.277 17:26:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:34.277 17:26:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.277 17:26:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.277 ************************************ 00:26:34.277 START TEST nvmf_interrupt 00:26:34.277 ************************************ 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:26:34.277 * Looking for test storage... 00:26:34.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:34.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.277 --rc genhtml_branch_coverage=1 00:26:34.277 --rc genhtml_function_coverage=1 00:26:34.277 --rc genhtml_legend=1 00:26:34.277 --rc geninfo_all_blocks=1 00:26:34.277 --rc geninfo_unexecuted_blocks=1 00:26:34.277 00:26:34.277 ' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:34.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.277 --rc genhtml_branch_coverage=1 00:26:34.277 --rc genhtml_function_coverage=1 00:26:34.277 --rc genhtml_legend=1 00:26:34.277 --rc geninfo_all_blocks=1 00:26:34.277 --rc geninfo_unexecuted_blocks=1 00:26:34.277 00:26:34.277 ' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:34.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.277 --rc genhtml_branch_coverage=1 00:26:34.277 --rc genhtml_function_coverage=1 00:26:34.277 --rc genhtml_legend=1 00:26:34.277 --rc geninfo_all_blocks=1 00:26:34.277 --rc geninfo_unexecuted_blocks=1 00:26:34.277 00:26:34.277 ' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:34.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:34.277 --rc genhtml_branch_coverage=1 00:26:34.277 --rc genhtml_function_coverage=1 00:26:34.277 --rc genhtml_legend=1 00:26:34.277 --rc geninfo_all_blocks=1 00:26:34.277 --rc geninfo_unexecuted_blocks=1 00:26:34.277 00:26:34.277 ' 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
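The `lt 1.15 2` run above is scripts/common.sh checking whether the installed lcov predates 2.x by splitting both version strings on the `.-:` separators and comparing them field by field. A condensed sketch of the same idea (an illustration, not the library function itself):

    version_lt() {                        # succeeds when $1 sorts strictly before $2
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                          # equal versions are not less-than
    }
    version_lt 1.15 2 && echo 'lcov predates 2.x'   # the branch taken above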
00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.277 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.535 17:26:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:26:34.536 17:26:16 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
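The build_nvmf_app_args lines above assemble the target's command line as a bash array, appending --interrupt-mode only because this job enables interrupt mode; nvmftestinit then selects the veth topology since NET_TYPE=virt. A small sketch of that conditional-argv pattern, with `app`, `shm_id` and `interrupt_mode` as stand-in names for the suite's variables:

    app=("$SPDK_BIN_DIR/nvmf_tgt")        # stand-in path for the target binary
    app+=(-i "$shm_id" -e 0xFFFF)         # shared-memory id and full tracepoint mask
    if (( interrupt_mode )); then
        app+=(--interrupt-mode)           # reactors block on events instead of busy-polling
    fi
    "${app[@]}" -m 0x3 &                  # quoted expansion keeps each flag a separate word
    nvmfpid=$!                            # later idle/busy checks key off this PID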
00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:34.536 Cannot find device "nvmf_init_br" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:34.536 Cannot find device "nvmf_init_br2" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:34.536 Cannot find device "nvmf_tgt_br" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:34.536 Cannot find device "nvmf_tgt_br2" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:34.536 Cannot find device "nvmf_init_br" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:34.536 Cannot find device "nvmf_init_br2" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:34.536 Cannot find device "nvmf_tgt_br" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:34.536 Cannot find device "nvmf_tgt_br2" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:34.536 Cannot find device "nvmf_br" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:26:34.536 Cannot find device "nvmf_init_if" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:34.536 Cannot find device "nvmf_init_if2" 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:34.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:34.536 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:34.536 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
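nvmf_veth_init has now built the virtual test network: two initiator interfaces stay in the root namespace (10.0.0.1/24 and 10.0.0.2/24), two target interfaces sit inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/24 and 10.0.0.4/24), and the nvmf_br bridge that will join the peer ends is up; common.sh@211-214 enslaves those peers next. A stripped-down sketch with one veth pair per side, assuming CAP_NET_ADMIN:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two sides
    ip link set nvmf_tgt_br master nvmf_br

With the peers bridged, 10.0.0.1 in the root namespace reaches 10.0.0.3 inside the namespace, which is exactly what the ping checks below verify.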
00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:34.794 17:26:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:34.794 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:34.794 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:34.794 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:34.794 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:34.794 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:34.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:34.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:26:34.794 00:26:34.794 --- 10.0.0.3 ping statistics --- 00:26:34.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.795 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:34.795 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:34.795 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:26:34.795 00:26:34.795 --- 10.0.0.4 ping statistics --- 00:26:34.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.795 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:34.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:34.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:26:34.795 00:26:34.795 --- 10.0.0.1 ping statistics --- 00:26:34.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.795 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:34.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:34.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:26:34.795 00:26:34.795 --- 10.0.0.2 ping statistics --- 00:26:34.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:34.795 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=106555 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 106555 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 106555 ']' 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.795 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.053 [2024-12-06 17:26:17.127237] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:35.053 [2024-12-06 17:26:17.128522] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:26:35.053 [2024-12-06 17:26:17.128595] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.053 [2024-12-06 17:26:17.280177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:35.053 [2024-12-06 17:26:17.317863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:35.053 [2024-12-06 17:26:17.317933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.053 [2024-12-06 17:26:17.317948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:35.053 [2024-12-06 17:26:17.317957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:35.053 [2024-12-06 17:26:17.317967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:35.053 [2024-12-06 17:26:17.321327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.053 [2024-12-06 17:26:17.321369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.312 [2024-12-06 17:26:17.376818] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:35.312 [2024-12-06 17:26:17.376976] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:35.312 [2024-12-06 17:26:17.377144] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:26:35.312 5000+0 records in 00:26:35.312 5000+0 records out 00:26:35.312 10240000 bytes (10 MB, 9.8 MiB) copied, 0.033414 s, 306 MB/s 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.312 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.312 AIO0 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.313 [2024-12-06 17:26:17.562241] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:35.313 [2024-12-06 17:26:17.590682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106555 0 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106555 0 idle 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:35.313 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106555 root 20 0 64.2g 45184 32896 S 0.0 0.4 0:00.22 reactor_0' 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106555 root 20 0 64.2g 45184 32896 S 0.0 0.4 0:00.22 reactor_0 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106555 1 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106555 1 idle 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:35.572 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106565 root 20 0 64.2g 45184 32896 S 0.0 0.4 0:00.00 reactor_1' 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106565 root 20 0 64.2g 45184 32896 S 0.0 0.4 0:00.00 reactor_1 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106615 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:26:35.831 
17:26:17 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106555 0 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106555 0 busy 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:35.831 17:26:17 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106555 root 20 0 64.2g 45184 32896 S 0.0 0.4 0:00.22 reactor_0' 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106555 root 20 0 64.2g 45184 32896 S 0.0 0.4 0:00.22 reactor_0 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:35.832 17:26:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106555 root 20 0 64.2g 46464 33280 R 99.9 0.4 0:01.59 reactor_0' 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106555 root 20 0 64.2g 46464 33280 R 99.9 0.4 0:01.59 reactor_0 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106555 1 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106555 1 busy 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106565 root 20 0 64.2g 46464 33280 R 66.7 0.4 0:00.79 reactor_1' 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106565 root 20 0 64.2g 46464 33280 R 66.7 0.4 0:00.79 reactor_1 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:37.209 17:26:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106615 00:26:47.182 Initializing NVMe Controllers 00:26:47.182 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:47.182 Controller IO queue size 256, less than required. 00:26:47.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:47.182 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:47.182 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:47.182 Initialization complete. Launching workers. 
00:26:47.182 ========================================================
00:26:47.182 Latency(us)
00:26:47.182 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:26:47.182 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    6239.34      24.37   41130.90    6211.31   94670.07
00:26:47.182 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    6175.34      24.12   41521.96    5598.22   84538.44
00:26:47.182 ========================================================
00:26:47.182 Total                                                                    :   12414.68      48.49   41325.42    5598.22   94670.07
00:26:47.182
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106555 0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106555 0 idle
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106555 root 20 0 64.2g 46464 33280 S 0.0 0.4 0:13.37 reactor_0'
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106555 root 20 0 64.2g 46464 33280 S 0.0 0.4 0:13.37 reactor_0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106555 1
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106555 1 idle
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555
00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106565 root 20 0 64.2g 46464 33280 S 0.0 0.4 0:06.58 reactor_1' 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106565 root 20 0 64.2g 46464 33280 S 0.0 0.4 0:06.58 reactor_1 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:47.182 17:26:28 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in 
{0..1} 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106555 0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106555 0 idle 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106555 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:13.43 reactor_0' 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106555 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:13.43 reactor_0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106555 1 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106555 1 idle 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106555 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- 
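
Both loop iterations above run the identical probe: one batch-mode top snapshot of the target's threads, filtered to the reactor of interest, with column 9 (%CPU) compared against the 65/30 busy/idle thresholds. A simplified sketch of the idle side of that check (not the verbatim interrupt/common.sh):

    # True when reactor_<idx> of <pid> is at or below the idle threshold.
    reactor_is_idle() {
        local pid=$1 idx=$2 idle_threshold=30
        local row cpu
        # -b batch mode, -H show threads, -n 1 single iteration
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
        cpu=$(awk '{print $9}' <<< "$row")      # %CPU field, e.g. "0.0"
        (( ${cpu%.*} <= idle_threshold ))       # compare the integer part
    }

With interrupt mode working, both reactors report 0.0% even right after I/O completed, which is exactly what the test is asserting.
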
interrupt/common.sh@25 -- # (( j = 10 )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:26:48.610 17:26:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106555 -w 256 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106565 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:06.60 reactor_1' 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106565 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:06.60 reactor_1 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:48.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.868 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.868 rmmod nvme_tcp 00:26:49.127 rmmod nvme_fabrics 00:26:49.127 rmmod nvme_keyring 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 106555 ']' 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt 
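
Teardown follows the usual fabrics pattern: disconnect the controller by NQN, confirm the serial disappears, then unload the kernel modules with errexit relaxed so an in-use module cannot abort cleanup. The sequence, boiled down from the trace (the rmmod lines above are modprobe's verbose output):

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    set +e                       # tolerate EBUSY while tearing modules down
    modprobe -v -r nvme-tcp      # verbose removal; also drops the
    modprobe -v -r nvme-fabrics  # nvme_fabrics / nvme_keyring dependencies
    set -e
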
-- nvmf/common.sh@518 -- # killprocess 106555 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 106555 ']' 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 106555 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106555 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:49.127 killing process with pid 106555 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106555' 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 106555 00:26:49.127 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 106555 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:49.386 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # 
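
killprocess, as it reads from this trace, probes the pid with signal 0, resolves the command name with ps (reactor_0 here, used to decide whether sudo needs special handling), then kills and reaps it. An approximate reconstruction, not the exact autotest_common.sh text:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0    # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"    # reaping works because nvmf_tgt is our child
    }
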
xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:26:49.644 00:26:49.644 real 0m15.359s 00:26:49.644 user 0m27.781s 00:26:49.644 sys 0m7.366s 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.644 17:26:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:26:49.644 ************************************ 00:26:49.644 END TEST nvmf_interrupt 00:26:49.644 ************************************ 00:26:49.644 00:26:49.644 real 20m23.080s 00:26:49.644 user 54m6.607s 00:26:49.644 sys 4m53.310s 00:26:49.644 17:26:31 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.644 ************************************ 00:26:49.644 END TEST nvmf_tcp 00:26:49.644 17:26:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.644 ************************************ 00:26:49.644 17:26:31 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:26:49.644 17:26:31 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:49.644 17:26:31 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:49.644 17:26:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.644 17:26:31 -- common/autotest_common.sh@10 -- # set +x 00:26:49.644 ************************************ 00:26:49.644 START TEST spdkcli_nvmf_tcp 00:26:49.644 ************************************ 00:26:49.644 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:49.644 * Looking for test storage... 
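
The iptr call in nvmf_tcp_fini above is a tidy scoping trick: every rule the harness installs is tagged with an SPDK_NVMF comment (the -m comment --comment flags are visible at the very end of this log), so cleanup can dump the whole ruleset, drop only the tagged lines, and restore the rest untouched:

    # Remove only the harness's own rules; the host firewall survives.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
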
00:26:49.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:49.644 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:49.644 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:26:49.644 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.904 --rc genhtml_branch_coverage=1 00:26:49.904 --rc genhtml_function_coverage=1 00:26:49.904 --rc genhtml_legend=1 00:26:49.904 --rc geninfo_all_blocks=1 00:26:49.904 --rc geninfo_unexecuted_blocks=1 00:26:49.904 00:26:49.904 ' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.904 --rc genhtml_branch_coverage=1 
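
The lt 1.15 2 probe above drives a generic comparator in scripts/common.sh: both version strings are split on '.', '-' and ':' and walked field by field, with missing fields read as zero. A hedged, condensed reconstruction (the real helper also validates every field through its decimal function):

    lt() { cmp_versions "$1" "<" "$2"; }

    cmp_versions() {
        local IFS='.-:'                  # split fields on . - :
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == *">"* ]]; return       # first difference decides
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == *"<"* ]]; return
            fi
        done
        [[ $op == *"="* ]]                       # all fields equal
    }

Since 1 < 2 in the first field, lt 1.15 2 succeeds and the lcov-aware LCOV_OPTS block that follows is selected.
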
00:26:49.904 --rc genhtml_function_coverage=1 00:26:49.904 --rc genhtml_legend=1 00:26:49.904 --rc geninfo_all_blocks=1 00:26:49.904 --rc geninfo_unexecuted_blocks=1 00:26:49.904 00:26:49.904 ' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.904 --rc genhtml_branch_coverage=1 00:26:49.904 --rc genhtml_function_coverage=1 00:26:49.904 --rc genhtml_legend=1 00:26:49.904 --rc geninfo_all_blocks=1 00:26:49.904 --rc geninfo_unexecuted_blocks=1 00:26:49.904 00:26:49.904 ' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:49.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:49.904 --rc genhtml_branch_coverage=1 00:26:49.904 --rc genhtml_function_coverage=1 00:26:49.904 --rc genhtml_legend=1 00:26:49.904 --rc geninfo_all_blocks=1 00:26:49.904 --rc geninfo_unexecuted_blocks=1 00:26:49.904 00:26:49.904 ' 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:49.904 17:26:31 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:49.904 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
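
The "[: : integer expression expected" complaint above is test(1) receiving an empty string on the left of -eq; the traced command is literally '[' '' -eq 1 ']'. The xtrace does not show which variable expanded to nothing, so, with a purely hypothetical name, the usual repair is a numeric default before comparing:

    # SPDK_SOME_FLAG is illustrative only; the real variable name is not
    # visible in this trace.
    if [[ ${SPDK_SOME_FLAG:-0} -eq 1 ]]; then
        echo "flag enabled"    # placeholder body
    fi

The run continues regardless, because a test that errors inside an if condition simply evaluates false and takes the other branch.
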
# set +x 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106945 00:26:49.904 17:26:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106945 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 106945 ']' 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.905 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:49.905 [2024-12-06 17:26:32.085510] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:26:49.905 [2024-12-06 17:26:32.085623] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106945 ] 00:26:50.164 [2024-12-06 17:26:32.238228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:50.164 [2024-12-06 17:26:32.277299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.164 [2024-12-06 17:26:32.277307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:50.164 17:26:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:50.164 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:50.164 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:50.164 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:50.164 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:50.164 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:50.164 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:50.164 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:50.164 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:50.164 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:50.164 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:50.164 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:50.164 ' 00:26:53.449 [2024-12-06 17:26:35.217980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:54.381 [2024-12-06 17:26:36.547049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:56.913 [2024-12-06 17:26:39.012860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:59.446 [2024-12-06 17:26:41.138329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:00.822 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:00.822 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:00.822 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:00.822 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
00:27:00.822 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:00.822 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:00.822 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:00.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:00.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:00.822 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:00.822 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:00.823 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:00.823 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
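
The Executing command lines above are spdkcli_job.py replaying its single quoted argument from earlier: each line carries the spdkcli command, a substring the command's output must contain, and an optional trailing True marking it as expected to succeed, with the parsed triple echoed back alongside the verdict. A minimal invocation in the same style, with the field semantics inferred from this trace:

    /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "
        '/bdevs/malloc create 32 512 Malloc1' 'Malloc1' True
        '/bdevs/malloc delete Malloc1' 'Malloc1'
    "
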
# set +x 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:00.823 17:26:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:01.389 17:26:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:01.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:01.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:01.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:01.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:01.389 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:01.389 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:01.389 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:01.389 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:01.389 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:01.389 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:01.389 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:01.389 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:01.389 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:01.389 ' 00:27:07.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:07.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:07.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:07.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:07.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:07.945 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:07.945 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:07.945 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:07.945 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:07.945 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:07.945 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:07.945 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:07.945 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:07.945 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:07.945 17:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:07.945 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:07.945 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.945 17:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106945 00:27:07.945 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106945 ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106945 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106945 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.946 killing process with pid 106945 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106945' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 106945 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 106945 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106945 ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106945 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106945 ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106945 00:27:07.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (106945) - No such process 00:27:07.946 Process with pid 106945 is not found 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 106945 is not found' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:07.946 00:27:07.946 real 0m17.610s 00:27:07.946 user 0m38.693s 00:27:07.946 sys 0m0.789s 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- 
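
The trailing cleanup() calls killprocess a second time on pid 106945, which has already exited, so the kill -0 probe fails with "No such process"; the helper treats that as success and only logs it, keeping cleanup idempotent. The guard, as it reads from the trace:

    if kill -0 "$pid" 2> /dev/null; then
        kill "$pid" && wait "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
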
common/autotest_common.sh@1130 -- # xtrace_disable 00:27:07.946 17:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:07.946 ************************************ 00:27:07.946 END TEST spdkcli_nvmf_tcp 00:27:07.946 ************************************ 00:27:07.946 17:26:49 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:07.946 17:26:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:07.946 17:26:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:07.946 17:26:49 -- common/autotest_common.sh@10 -- # set +x 00:27:07.946 ************************************ 00:27:07.946 START TEST nvmf_identify_passthru 00:27:07.946 ************************************ 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:07.946 * Looking for test storage... 00:27:07.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.946 --rc genhtml_branch_coverage=1 00:27:07.946 --rc genhtml_function_coverage=1 00:27:07.946 --rc genhtml_legend=1 00:27:07.946 --rc geninfo_all_blocks=1 00:27:07.946 --rc geninfo_unexecuted_blocks=1 00:27:07.946 00:27:07.946 ' 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.946 --rc genhtml_branch_coverage=1 00:27:07.946 --rc genhtml_function_coverage=1 00:27:07.946 --rc genhtml_legend=1 00:27:07.946 --rc geninfo_all_blocks=1 00:27:07.946 --rc geninfo_unexecuted_blocks=1 00:27:07.946 00:27:07.946 ' 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.946 --rc genhtml_branch_coverage=1 00:27:07.946 --rc genhtml_function_coverage=1 00:27:07.946 --rc genhtml_legend=1 00:27:07.946 --rc geninfo_all_blocks=1 00:27:07.946 --rc geninfo_unexecuted_blocks=1 00:27:07.946 00:27:07.946 ' 00:27:07.946 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:07.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:07.946 --rc genhtml_branch_coverage=1 00:27:07.946 --rc genhtml_function_coverage=1 00:27:07.946 --rc genhtml_legend=1 00:27:07.946 --rc geninfo_all_blocks=1 00:27:07.946 --rc geninfo_unexecuted_blocks=1 00:27:07.946 00:27:07.946 ' 00:27:07.946 17:26:49 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:07.946 
17:26:49 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:07.946 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.946 17:26:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.946 17:26:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.946 17:26:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.946 17:26:49 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
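
As in the earlier nvmf tests, the host identity is minted by nvme-cli: gen-hostnqn emits a fresh uuid-based NQN, and the host id reused throughout this log is just its trailing UUID. One way to derive both that is consistent with the values above (the exact nvmf/common.sh expression may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing <uuid>
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
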
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:07.947 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:07.947 17:26:49 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:07.947 17:26:49 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:27:07.947 17:26:49 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:07.947 17:26:49 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:07.947 17:26:49 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:07.947 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:07.947 17:26:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:07.947 Cannot find device "nvmf_init_br" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:07.947 Cannot find device "nvmf_init_br2" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:07.947 Cannot find device "nvmf_tgt_br" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:07.947 Cannot find device "nvmf_tgt_br2" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:07.947 Cannot find device "nvmf_init_br" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:07.947 Cannot find device "nvmf_init_br2" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:07.947 Cannot find device "nvmf_tgt_br" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:07.947 Cannot find device "nvmf_tgt_br2" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:07.947 Cannot find device "nvmf_br" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:07.947 Cannot find device "nvmf_init_if" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:07.947 Cannot find device "nvmf_init_if2" 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:07.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:07.947 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:07.947 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:07.948 17:26:49 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:07.948 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:07.948 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:27:07.948 00:27:07.948 --- 10.0.0.3 ping statistics --- 00:27:07.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.948 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:07.948 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:07.948 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:27:07.948 00:27:07.948 --- 10.0.0.4 ping statistics --- 00:27:07.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.948 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:07.948 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:07.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:27:07.948 00:27:07.948 --- 10.0.0.1 ping statistics --- 00:27:07.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.948 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:07.948 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:07.948 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:27:07.948 00:27:07.948 --- 10.0.0.2 ping statistics --- 00:27:07.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:07.948 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:07.948 17:26:50 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:07.948 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:07.948 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:08.206 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
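Up to this point the trace is scaffolding. nvmf_veth_init builds the test network, then get_first_nvme_bdf and spdk_nvme_identify capture the serial number (12340) of the local PCIe controller at 0000:00:10.0; the test later compares it against what the same controller reports once exposed over NVMe/TCP. Two details worth noting: the "Cannot find device" lines are expected, since the script first tries to tear down leftovers from an earlier run and follows each failing cleanup command with true; and the "[: : integer expression expected" message from common.sh line 33 is benign, as '[' '' -eq 1 ']' feeds an empty variable to a numeric test (defaulting it, e.g. [ "${FLAG:-0}" -eq 1 ] with FLAG as a placeholder name, would silence it). The network itself reduces to a veth-plus-bridge topology; a condensed sketch of the commands traced above, keeping only the first initiator/target pair (the *_if2/*_br2 pair is set up identically):

    # Initiator-side interfaces stay in the root namespace; target-side
    # interfaces move into their own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # A bridge joins the two peer legs so 10.0.0.1 can reach 10.0.0.3.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP port; the comment tag lets teardown find the rule later.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # After bringing every link (and lo inside the netns) up, verify reachability:
    ping -c 1 10.0.0.3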
00:27:08.206 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:27:08.206 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:08.206 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:08.465 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:08.465 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:08.465 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.465 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.465 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.466 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107461 00:27:08.466 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:08.466 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.466 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107461 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107461 ']' 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.466 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.466 [2024-12-06 17:26:50.719370] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:27:08.466 [2024-12-06 17:26:50.719461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.724 [2024-12-06 17:26:50.865361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:08.724 [2024-12-06 17:26:50.899589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.724 [2024-12-06 17:26:50.899640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:08.724 [2024-12-06 17:26:50.899652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.724 [2024-12-06 17:26:50.899660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
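The target is started inside the namespace with --wait-for-rpc, which makes nvmf_tgt pause after EAL and reactor bring-up until an RPC finishes initialization; that window is what lets the test enable the passthru identify handler before any subsystem exists. The following trace lines do this through rpc_cmd, the harness wrapper around SPDK's RPC client. As a standalone sketch (invoking scripts/rpc.py directly is an assumption here, since the wrapper hides the exact client call):

    # Start the target paused; it serves RPCs on /var/tmp/spdk.sock.
    ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # Pre-init configuration must land before framework_start_init.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The "Custom identify ctrlr handler enabled" notice below confirms the pre-init setting takes effect once framework_start_init runs.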
00:27:08.724 [2024-12-06 17:26:50.899667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.724 [2024-12-06 17:26:50.900551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.724 [2024-12-06 17:26:50.900690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.724 [2024-12-06 17:26:50.900723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:08.724 [2024-12-06 17:26:50.900726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:27:08.724 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.724 17:26:50 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.724 17:26:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 [2024-12-06 17:26:51.031302] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 [2024-12-06 17:26:51.040678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 Nvme0n1 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 [2024-12-06 17:26:51.182481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:08.983 [ 00:27:08.983 { 00:27:08.983 "allow_any_host": true, 00:27:08.983 "hosts": [], 00:27:08.983 "listen_addresses": [], 00:27:08.983 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:08.983 "subtype": "Discovery" 00:27:08.983 }, 00:27:08.983 { 00:27:08.983 "allow_any_host": true, 00:27:08.983 "hosts": [], 00:27:08.983 "listen_addresses": [ 00:27:08.983 { 00:27:08.983 "adrfam": "IPv4", 00:27:08.983 "traddr": "10.0.0.3", 00:27:08.983 "trsvcid": "4420", 00:27:08.983 "trtype": "TCP" 00:27:08.983 } 00:27:08.983 ], 00:27:08.983 "max_cntlid": 65519, 00:27:08.983 "max_namespaces": 1, 00:27:08.983 "min_cntlid": 1, 00:27:08.983 "model_number": "SPDK bdev Controller", 00:27:08.983 "namespaces": [ 00:27:08.983 { 00:27:08.983 "bdev_name": "Nvme0n1", 00:27:08.983 "name": "Nvme0n1", 00:27:08.983 "nguid": "BA911170071840568B1B3CB8F23893F3", 00:27:08.983 "nsid": 1, 00:27:08.983 "uuid": "ba911170-0718-4056-8b1b-3cb8f23893f3" 00:27:08.983 } 00:27:08.983 ], 00:27:08.983 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:08.983 "serial_number": "SPDK00000000000001", 00:27:08.983 "subtype": "NVMe" 00:27:08.983 } 00:27:08.983 ] 00:27:08.983 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:08.983 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:09.242 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:09.242 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:09.242 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:09.242 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:09.500 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:09.500 17:26:51 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:09.500 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:09.500 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:09.500 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.500 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:09.500 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.500 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:09.500 17:26:51 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:09.500 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:09.500 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:09.761 rmmod nvme_tcp 00:27:09.761 rmmod nvme_fabrics 00:27:09.761 rmmod nvme_keyring 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107461 ']' 00:27:09.761 17:26:51 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107461 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107461 ']' 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107461 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107461 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:09.761 killing process with pid 107461 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107461' 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107461 00:27:09.761 17:26:51 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107461 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:09.761 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.020 17:26:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.020 17:26:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.020 17:26:52 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:27:10.020 00:27:10.020 real 0m2.805s 00:27:10.020 user 0m5.030s 00:27:10.020 sys 0m0.824s 00:27:10.020 ************************************ 00:27:10.020 17:26:52 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:10.020 17:26:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:10.020 END TEST nvmf_identify_passthru 00:27:10.020 ************************************ 00:27:10.279 17:26:52 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:10.279 17:26:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:10.279 17:26:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:10.279 17:26:52 -- common/autotest_common.sh@10 -- # set +x 00:27:10.279 ************************************ 00:27:10.279 START TEST nvmf_dif 00:27:10.279 ************************************ 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:10.279 * Looking for test storage... 
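The teardown that closes nvmf_identify_passthru mirrors its setup: unload the nvme-tcp, nvme-fabrics and nvme-keyring modules, kill the target by pid (107461), then remove the firewall rules and virtual links. Because every rule was installed with an SPDK_NVMF comment tag, the iptr helper can clean up by filtering the whole ruleset instead of deleting rules one by one:

    # Drop only SPDK-tagged rules, leaving the rest of the ruleset untouched.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    # _remove_spdk_ns then deletes the nvmf_tgt_ns_spdk namespace itself.

The real/user/sys figures and the START/END banners come from the run_test wrapper, which times each test it invokes; the run then moves straight on to nvmf_dif.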
00:27:10.279 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:10.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.279 --rc genhtml_branch_coverage=1 00:27:10.279 --rc genhtml_function_coverage=1 00:27:10.279 --rc genhtml_legend=1 00:27:10.279 --rc geninfo_all_blocks=1 00:27:10.279 --rc geninfo_unexecuted_blocks=1 00:27:10.279 00:27:10.279 ' 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:10.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.279 --rc genhtml_branch_coverage=1 00:27:10.279 --rc genhtml_function_coverage=1 00:27:10.279 --rc genhtml_legend=1 00:27:10.279 --rc geninfo_all_blocks=1 00:27:10.279 --rc geninfo_unexecuted_blocks=1 00:27:10.279 00:27:10.279 ' 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:10.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.279 --rc genhtml_branch_coverage=1 00:27:10.279 --rc genhtml_function_coverage=1 00:27:10.279 --rc genhtml_legend=1 00:27:10.279 --rc geninfo_all_blocks=1 00:27:10.279 --rc geninfo_unexecuted_blocks=1 00:27:10.279 00:27:10.279 ' 00:27:10.279 17:26:52 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:10.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:10.279 --rc genhtml_branch_coverage=1 00:27:10.279 --rc genhtml_function_coverage=1 00:27:10.279 --rc genhtml_legend=1 00:27:10.279 --rc geninfo_all_blocks=1 00:27:10.279 --rc geninfo_unexecuted_blocks=1 00:27:10.279 00:27:10.279 ' 00:27:10.279 17:26:52 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:10.279 17:26:52 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:10.279 17:26:52 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:10.279 17:26:52 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.279 17:26:52 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.279 17:26:52 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.279 17:26:52 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:10.280 17:26:52 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:10.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:10.280 17:26:52 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:10.280 17:26:52 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:10.280 17:26:52 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:10.280 17:26:52 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:10.280 17:26:52 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.280 17:26:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:10.280 17:26:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:10.280 17:26:52 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:10.280 Cannot find device "nvmf_init_br" 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:10.280 Cannot find device "nvmf_init_br2" 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:10.280 Cannot find device "nvmf_tgt_br" 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:10.280 17:26:52 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:10.625 Cannot find device "nvmf_tgt_br2" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:10.626 Cannot find device "nvmf_init_br" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:10.626 Cannot find device "nvmf_init_br2" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:10.626 Cannot find device "nvmf_tgt_br" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:10.626 Cannot find device "nvmf_tgt_br2" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:10.626 Cannot find device "nvmf_br" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:10.626 Cannot find device "nvmf_init_if" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:10.626 Cannot find device "nvmf_init_if2" 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:10.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:10.626 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:10.626 17:26:52 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:10.913 17:26:52 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:10.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:10.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:27:10.913 00:27:10.913 --- 10.0.0.3 ping statistics --- 00:27:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.913 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:10.913 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:10.913 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:27:10.913 00:27:10.913 --- 10.0.0.4 ping statistics --- 00:27:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.913 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:10.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:27:10.913 00:27:10.913 --- 10.0.0.1 ping statistics --- 00:27:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.913 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:10.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:10.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:27:10.913 00:27:10.913 --- 10.0.0.2 ping statistics --- 00:27:10.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.913 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:10.913 17:26:52 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:11.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.172 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:11.172 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:11.172 17:26:53 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:11.172 17:26:53 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=107845 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 107845 00:27:11.172 17:26:53 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 107845 ']' 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.172 17:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.172 [2024-12-06 17:26:53.387084] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:27:11.173 [2024-12-06 17:26:53.387198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.432 [2024-12-06 17:26:53.538024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.432 [2024-12-06 17:26:53.569997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
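nvmf_dif repeats the same network bring-up, and this time the iso branch is taken ('[' iso == iso ']'), so setup.sh runs first and finds the NVMe devices already bound to uio_pci_generic. The functional difference follows: dif.sh appends --dif-insert-or-strip to the transport options, which tells the TCP transport to insert T10 DIF protection information on writes and strip/verify it on reads on the initiator's behalf, and it backs the subsystem with a null bdev carrying per-block metadata. Condensed from the trace below (scripts/rpc.py again standing in for the harness's rpc_cmd):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    # Name, size in MB, block size; 16 bytes of per-block metadata, DIF type 1
    # (the NULL_SIZE/NULL_BLOCK_SIZE/NULL_META/NULL_DIF defaults from dif.sh).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 \
      --md-size 16 --dif-type 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem \
      nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420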
00:27:11.432 [2024-12-06 17:26:53.570072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.432 [2024-12-06 17:26:53.570083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.432 [2024-12-06 17:26:53.570092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.432 [2024-12-06 17:26:53.570099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:11.432 [2024-12-06 17:26:53.570422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:11.432 17:26:53 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.432 17:26:53 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.432 17:26:53 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:11.432 17:26:53 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.432 [2024-12-06 17:26:53.698656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.432 17:26:53 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.432 17:26:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:11.432 ************************************ 00:27:11.432 START TEST fio_dif_1_default 00:27:11.432 ************************************ 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:11.432 bdev_null0 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:11.432 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.432 17:26:53 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:11.692 [2024-12-06 17:26:53.742816] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:11.692 { 00:27:11.692 "params": { 00:27:11.692 "name": "Nvme$subsystem", 00:27:11.692 "trtype": "$TEST_TRANSPORT", 00:27:11.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:11.692 "adrfam": "ipv4", 00:27:11.692 "trsvcid": "$NVMF_PORT", 00:27:11.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:11.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:11.692 "hdgst": ${hdgst:-false}, 00:27:11.692 "ddgst": ${ddgst:-false} 00:27:11.692 }, 00:27:11.692 "method": "bdev_nvme_attach_controller" 00:27:11.692 } 00:27:11.692 EOF 00:27:11.692 )") 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.692 17:26:53 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:11.692 "params": { 00:27:11.692 "name": "Nvme0", 00:27:11.692 "trtype": "tcp", 00:27:11.692 "traddr": "10.0.0.3", 00:27:11.692 "adrfam": "ipv4", 00:27:11.692 "trsvcid": "4420", 00:27:11.692 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:11.692 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:11.692 "hdgst": false, 00:27:11.692 "ddgst": false 00:27:11.692 }, 00:27:11.692 "method": "bdev_nvme_attach_controller" 00:27:11.692 }' 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:11.692 17:26:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:11.951 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:11.951 fio-3.35 00:27:11.951 Starting 1 thread 00:27:24.159 00:27:24.159 filename0: (groupid=0, jobs=1): err= 0: pid=107915: Fri Dec 6 17:27:04 2024 00:27:24.159 read: IOPS=267, BW=1072KiB/s (1097kB/s)(10.5MiB/10032msec) 00:27:24.159 slat (nsec): min=7654, max=65107, avg=10108.52, stdev=5552.19 00:27:24.159 clat (usec): min=453, max=42016, avg=14896.16, stdev=19405.41 00:27:24.159 lat (usec): min=461, max=42027, avg=14906.27, stdev=19405.72 00:27:24.159 clat percentiles (usec): 00:27:24.159 | 1.00th=[ 465], 5.00th=[ 474], 10.00th=[ 482], 20.00th=[ 490], 00:27:24.159 | 30.00th=[ 498], 40.00th=[ 510], 50.00th=[ 537], 
60.00th=[ 644], 00:27:24.159 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:24.159 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:27:24.159 | 99.99th=[42206] 00:27:24.159 bw ( KiB/s): min= 704, max= 3008, per=100.00%, avg=1073.60, stdev=500.55, samples=20 00:27:24.159 iops : min= 176, max= 752, avg=268.40, stdev=125.14, samples=20 00:27:24.159 lat (usec) : 500=31.81%, 750=32.48%, 1000=0.15% 00:27:24.159 lat (msec) : 4=0.15%, 50=35.42% 00:27:24.159 cpu : usr=91.70%, sys=7.83%, ctx=27, majf=0, minf=9 00:27:24.159 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:24.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.160 issued rwts: total=2688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.160 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:24.160 00:27:24.160 Run status group 0 (all jobs): 00:27:24.160 READ: bw=1072KiB/s (1097kB/s), 1072KiB/s-1072KiB/s (1097kB/s-1097kB/s), io=10.5MiB (11.0MB), run=10032-10032msec 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 ************************************ 00:27:24.160 END TEST fio_dif_1_default 00:27:24.160 ************************************ 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 00:27:24.160 real 0m11.014s 00:27:24.160 user 0m9.874s 00:27:24.160 sys 0m1.028s 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:24.160 17:27:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:24.160 17:27:04 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 ************************************ 00:27:24.160 START TEST fio_dif_1_multi_subsystems 00:27:24.160 ************************************ 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:27:24.160 17:27:04 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 bdev_null0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 [2024-12-06 17:27:04.810080] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 bdev_null1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 
-- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:24.160 { 00:27:24.160 "params": { 00:27:24.160 "name": "Nvme$subsystem", 00:27:24.160 "trtype": "$TEST_TRANSPORT", 00:27:24.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.160 "adrfam": "ipv4", 00:27:24.160 "trsvcid": "$NVMF_PORT", 00:27:24.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.160 "hdgst": ${hdgst:-false}, 00:27:24.160 "ddgst": ${ddgst:-false} 00:27:24.160 }, 00:27:24.160 "method": "bdev_nvme_attach_controller" 00:27:24.160 } 00:27:24.160 EOF 00:27:24.160 )") 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
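[annotation] At this point create_subsystems has run for IDs 0 and 1: each gets a 64 MiB null bdev with 512-byte blocks plus 16 bytes of per-block metadata carrying DIF type 1, wrapped in its own subsystem and exposed on the shared 10.0.0.3:4420 TCP listener. Condensed from the rpc_cmd calls traced above; the loop and the bare rpc.py spelling are the only additions:

    for i in 0 1; do
        rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done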
00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:24.160 { 00:27:24.160 "params": { 00:27:24.160 "name": "Nvme$subsystem", 00:27:24.160 "trtype": "$TEST_TRANSPORT", 00:27:24.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:24.160 "adrfam": "ipv4", 00:27:24.160 "trsvcid": "$NVMF_PORT", 00:27:24.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:24.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:24.160 "hdgst": ${hdgst:-false}, 00:27:24.160 "ddgst": ${ddgst:-false} 00:27:24.160 }, 00:27:24.160 "method": "bdev_nvme_attach_controller" 00:27:24.160 } 00:27:24.160 EOF 00:27:24.160 )") 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:24.160 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
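[annotation] The heredoc stanzas above are gen_nvmf_target_json at work: one bdev_nvme_attach_controller entry per subsystem index is appended to config[], then jq and IFS=, (next lines) join them into the JSON array printed just below. Before launching fio, the fio_plugin helper also probes the bdev plugin for a sanitizer runtime, which is what the ldd/grep/awk lines in the trace are doing. A condensed sketch of that wiring, assuming $SPDK_DIR and that fds 62 and 61 carry the generated JSON config and fio job file (in the harness they are process substitutions):

    plugin=$SPDK_DIR/build/fio/spdk_bdev
    # If the plugin links libasan (or libclang_rt.asan), that runtime must be
    # preloaded ahead of the plugin itself or fio aborts at startup.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61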
00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:24.161 "params": { 00:27:24.161 "name": "Nvme0", 00:27:24.161 "trtype": "tcp", 00:27:24.161 "traddr": "10.0.0.3", 00:27:24.161 "adrfam": "ipv4", 00:27:24.161 "trsvcid": "4420", 00:27:24.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:24.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:24.161 "hdgst": false, 00:27:24.161 "ddgst": false 00:27:24.161 }, 00:27:24.161 "method": "bdev_nvme_attach_controller" 00:27:24.161 },{ 00:27:24.161 "params": { 00:27:24.161 "name": "Nvme1", 00:27:24.161 "trtype": "tcp", 00:27:24.161 "traddr": "10.0.0.3", 00:27:24.161 "adrfam": "ipv4", 00:27:24.161 "trsvcid": "4420", 00:27:24.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:24.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:24.161 "hdgst": false, 00:27:24.161 "ddgst": false 00:27:24.161 }, 00:27:24.161 "method": "bdev_nvme_attach_controller" 00:27:24.161 }' 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:24.161 17:27:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:24.161 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:24.161 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:24.161 fio-3.35 00:27:24.161 Starting 2 threads 00:27:34.141 00:27:34.141 filename0: (groupid=0, jobs=1): err= 0: pid=108071: Fri Dec 6 17:27:15 2024 00:27:34.141 read: IOPS=137, BW=552KiB/s (565kB/s)(5520KiB/10005msec) 00:27:34.141 slat (nsec): min=7936, max=58787, avg=10846.86, stdev=5560.06 00:27:34.141 clat (usec): min=445, max=42896, avg=28962.87, stdev=18552.42 00:27:34.141 lat (usec): min=453, max=42921, avg=28973.72, stdev=18552.21 00:27:34.141 clat percentiles (usec): 00:27:34.141 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 519], 00:27:34.141 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:34.141 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:27:34.141 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:27:34.141 | 99.99th=[42730] 00:27:34.141 bw ( KiB/s): min= 416, max= 1056, per=50.26%, avg=549.05, stdev=145.97, samples=19 00:27:34.141 iops : 
min= 104, max= 264, avg=137.26, stdev=36.49, samples=19 00:27:34.141 lat (usec) : 500=11.67%, 750=17.03%, 1000=0.87% 00:27:34.141 lat (msec) : 2=0.29%, 50=70.14% 00:27:34.141 cpu : usr=95.39%, sys=4.19%, ctx=7, majf=0, minf=9 00:27:34.141 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:34.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.141 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.141 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:34.141 filename1: (groupid=0, jobs=1): err= 0: pid=108072: Fri Dec 6 17:27:15 2024 00:27:34.141 read: IOPS=135, BW=541KiB/s (554kB/s)(5408KiB/10005msec) 00:27:34.141 slat (nsec): min=7977, max=46557, avg=11877.19, stdev=7091.62 00:27:34.141 clat (usec): min=456, max=42883, avg=29559.20, stdev=18270.07 00:27:34.141 lat (usec): min=464, max=42909, avg=29571.08, stdev=18269.72 00:27:34.141 clat percentiles (usec): 00:27:34.141 | 1.00th=[ 469], 5.00th=[ 486], 10.00th=[ 498], 20.00th=[ 529], 00:27:34.141 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:34.141 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:27:34.141 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:27:34.141 | 99.99th=[42730] 00:27:34.141 bw ( KiB/s): min= 416, max= 800, per=49.81%, avg=544.00, stdev=92.38, samples=19 00:27:34.141 iops : min= 104, max= 200, avg=136.00, stdev=23.09, samples=19 00:27:34.141 lat (usec) : 500=11.17%, 750=13.98%, 1000=2.96% 00:27:34.141 lat (msec) : 4=0.30%, 50=71.60% 00:27:34.141 cpu : usr=95.72%, sys=3.85%, ctx=13, majf=0, minf=0 00:27:34.141 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:34.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.141 issued rwts: total=1352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.141 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:34.141 00:27:34.141 Run status group 0 (all jobs): 00:27:34.141 READ: bw=1092KiB/s (1118kB/s), 541KiB/s-552KiB/s (554kB/s-565kB/s), io=10.7MiB (11.2MB), run=10005-10005msec 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.141 17:27:15 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 ************************************ 00:27:34.141 END TEST fio_dif_1_multi_subsystems 00:27:34.141 ************************************ 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.141 00:27:34.141 real 0m11.130s 00:27:34.141 user 0m19.914s 00:27:34.141 sys 0m1.058s 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.141 17:27:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 17:27:15 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:34.141 17:27:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:34.141 17:27:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.141 17:27:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:34.141 ************************************ 00:27:34.141 START TEST fio_dif_rand_params 00:27:34.141 ************************************ 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:34.141 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:34.142 bdev_null0 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:34.142 [2024-12-06 17:27:15.993791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:34.142 17:27:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:34.142 { 00:27:34.142 "params": { 00:27:34.142 "name": "Nvme$subsystem", 00:27:34.142 "trtype": "$TEST_TRANSPORT", 00:27:34.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:34.142 "adrfam": "ipv4", 00:27:34.142 "trsvcid": "$NVMF_PORT", 00:27:34.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:34.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:34.142 "hdgst": ${hdgst:-false}, 00:27:34.142 "ddgst": ${ddgst:-false} 00:27:34.142 }, 00:27:34.142 "method": "bdev_nvme_attach_controller" 00:27:34.142 } 00:27:34.142 EOF 00:27:34.142 )") 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
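[annotation] For fio_dif_rand_params the shape changes: the null bdev above is created with --dif-type 3, and per the dif.sh@103 lines in the trace the workload is 128 KiB I/O at iodepth 3 from 3 jobs for 5 seconds. A hedged reconstruction of the job file gen_fio_conf feeds through fd 61, emitted here as a bash heredoc; every value is echoed in the trace or in fio's banner on the next lines except the bdev name, which is assumed from the Nvme0 controller name, and direct/time_based, which are inferred from the ~5 s runtimes:

    # hypothetical reconstruction, not copied from the harness
    cat <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1
    direct=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=5
    time_based=1
    [filename0]
    filename=Nvme0n1
    EOF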
00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:34.142 "params": { 00:27:34.142 "name": "Nvme0", 00:27:34.142 "trtype": "tcp", 00:27:34.142 "traddr": "10.0.0.3", 00:27:34.142 "adrfam": "ipv4", 00:27:34.142 "trsvcid": "4420", 00:27:34.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.142 "hdgst": false, 00:27:34.142 "ddgst": false 00:27:34.142 }, 00:27:34.142 "method": "bdev_nvme_attach_controller" 00:27:34.142 }' 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:34.142 17:27:16 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:34.142 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:34.142 ... 
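[annotation] A quick sanity check on the result blocks that follow: fio's bandwidth column is just IOPS times block size. For the first job below:

    232 IOPS * 128 KiB = 29696 KiB/s ~= 29.0 MiB/s    (reported 29.1 MiB/s; fio
    rounds IOPS -- the exact figure is 1164 reads / 5.005 s = 232.6 IOPS,
    i.e. 29.07 MiB/s, and 1164 * 128 KiB ~= the 146 MiB total shown)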
00:27:34.142 fio-3.35 00:27:34.142 Starting 3 threads 00:27:39.476 00:27:39.476 filename0: (groupid=0, jobs=1): err= 0: pid=108225: Fri Dec 6 17:27:21 2024 00:27:39.476 read: IOPS=232, BW=29.1MiB/s (30.5MB/s)(146MiB/5005msec) 00:27:39.476 slat (nsec): min=5216, max=48095, avg=13309.00, stdev=3860.49 00:27:39.476 clat (usec): min=4377, max=55356, avg=12877.51, stdev=7612.27 00:27:39.476 lat (usec): min=4391, max=55369, avg=12890.82, stdev=7612.23 00:27:39.476 clat percentiles (usec): 00:27:39.476 | 1.00th=[ 5669], 5.00th=[ 7111], 10.00th=[ 7963], 20.00th=[ 8586], 00:27:39.476 | 30.00th=[10945], 40.00th=[11994], 50.00th=[12387], 60.00th=[12780], 00:27:39.476 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14222], 95.00th=[15139], 00:27:39.476 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:27:39.476 | 99.99th=[55313] 00:27:39.476 bw ( KiB/s): min=24832, max=36608, per=33.88%, avg=29721.60, stdev=3601.13, samples=10 00:27:39.476 iops : min= 194, max= 286, avg=232.20, stdev=28.13, samples=10 00:27:39.476 lat (msec) : 10=25.60%, 20=71.05%, 50=1.12%, 100=2.23% 00:27:39.476 cpu : usr=92.59%, sys=5.90%, ctx=5, majf=0, minf=0 00:27:39.476 IO depths : 1=5.1%, 2=94.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.476 issued rwts: total=1164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:39.476 filename0: (groupid=0, jobs=1): err= 0: pid=108226: Fri Dec 6 17:27:21 2024 00:27:39.476 read: IOPS=239, BW=30.0MiB/s (31.5MB/s)(150MiB/5001msec) 00:27:39.476 slat (nsec): min=4978, max=46615, avg=11579.97, stdev=4737.51 00:27:39.476 clat (usec): min=4445, max=56483, avg=12474.65, stdev=4893.30 00:27:39.476 lat (usec): min=4453, max=56508, avg=12486.23, stdev=4893.50 00:27:39.476 clat percentiles (usec): 00:27:39.476 | 1.00th=[ 4490], 5.00th=[ 4490], 10.00th=[ 4555], 20.00th=[ 9110], 00:27:39.476 | 30.00th=[ 9896], 40.00th=[11863], 50.00th=[14353], 60.00th=[14877], 00:27:39.476 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:27:39.476 | 99.00th=[17433], 99.50th=[18220], 99.90th=[56361], 99.95th=[56361], 00:27:39.476 | 99.99th=[56361] 00:27:39.476 bw ( KiB/s): min=23808, max=38400, per=35.60%, avg=31232.00, stdev=5511.44, samples=9 00:27:39.476 iops : min= 186, max= 300, avg=244.00, stdev=43.06, samples=9 00:27:39.476 lat (msec) : 10=30.33%, 20=69.17%, 50=0.25%, 100=0.25% 00:27:39.476 cpu : usr=92.76%, sys=5.68%, ctx=9, majf=0, minf=0 00:27:39.476 IO depths : 1=28.8%, 2=71.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.476 issued rwts: total=1200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:39.476 filename0: (groupid=0, jobs=1): err= 0: pid=108227: Fri Dec 6 17:27:21 2024 00:27:39.476 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(133MiB/5002msec) 00:27:39.476 slat (nsec): min=8077, max=43557, avg=14073.03, stdev=4297.30 00:27:39.476 clat (usec): min=2324, max=54575, avg=14053.71, stdev=10820.66 00:27:39.476 lat (usec): min=2337, max=54584, avg=14067.79, stdev=10820.86 00:27:39.476 clat percentiles (usec): 00:27:39.476 | 1.00th=[ 4555], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[10159], 
00:27:39.476 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11469], 60.00th=[11731], 00:27:39.476 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12780], 95.00th=[51119], 00:27:39.476 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[54789], 00:27:39.476 | 99.99th=[54789] 00:27:39.476 bw ( KiB/s): min=19200, max=34560, per=29.86%, avg=26197.33, stdev=5816.61, samples=9 00:27:39.476 iops : min= 150, max= 270, avg=204.67, stdev=45.44, samples=9 00:27:39.476 lat (msec) : 4=0.09%, 10=15.76%, 20=76.55%, 50=1.41%, 100=6.19% 00:27:39.476 cpu : usr=92.24%, sys=6.08%, ctx=14, majf=0, minf=0 00:27:39.476 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:39.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.476 issued rwts: total=1066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.476 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:39.476 00:27:39.476 Run status group 0 (all jobs): 00:27:39.476 READ: bw=85.7MiB/s (89.8MB/s), 26.6MiB/s-30.0MiB/s (27.9MB/s-31.5MB/s), io=429MiB (450MB), run=5001-5005msec 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.734 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 bdev_null0 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 [2024-12-06 17:27:21.966094] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 bdev_null1 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 bdev_null2 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.735 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # 
fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:39.993 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:39.993 { 00:27:39.993 "params": { 00:27:39.993 "name": "Nvme$subsystem", 00:27:39.993 "trtype": "$TEST_TRANSPORT", 00:27:39.993 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.993 "adrfam": "ipv4", 00:27:39.993 "trsvcid": "$NVMF_PORT", 00:27:39.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.993 "hdgst": ${hdgst:-false}, 00:27:39.993 "ddgst": ${ddgst:-false} 00:27:39.993 }, 00:27:39.993 "method": "bdev_nvme_attach_controller" 00:27:39.993 } 00:27:39.993 EOF 00:27:39.994 )") 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:39.994 { 00:27:39.994 "params": { 00:27:39.994 "name": "Nvme$subsystem", 00:27:39.994 "trtype": "$TEST_TRANSPORT", 00:27:39.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.994 "adrfam": "ipv4", 00:27:39.994 "trsvcid": "$NVMF_PORT", 00:27:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.994 "hdgst": ${hdgst:-false}, 00:27:39.994 "ddgst": ${ddgst:-false} 00:27:39.994 }, 00:27:39.994 "method": "bdev_nvme_attach_controller" 00:27:39.994 } 00:27:39.994 EOF 00:27:39.994 )") 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:39.994 17:27:22 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:39.994 { 00:27:39.994 "params": { 00:27:39.994 "name": "Nvme$subsystem", 00:27:39.994 "trtype": "$TEST_TRANSPORT", 00:27:39.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:39.994 "adrfam": "ipv4", 00:27:39.994 "trsvcid": "$NVMF_PORT", 00:27:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:39.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:39.994 "hdgst": ${hdgst:-false}, 00:27:39.994 "ddgst": ${ddgst:-false} 00:27:39.994 }, 00:27:39.994 "method": "bdev_nvme_attach_controller" 00:27:39.994 } 00:27:39.994 EOF 00:27:39.994 )") 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:39.994 "params": { 00:27:39.994 "name": "Nvme0", 00:27:39.994 "trtype": "tcp", 00:27:39.994 "traddr": "10.0.0.3", 00:27:39.994 "adrfam": "ipv4", 00:27:39.994 "trsvcid": "4420", 00:27:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.994 "hdgst": false, 00:27:39.994 "ddgst": false 00:27:39.994 }, 00:27:39.994 "method": "bdev_nvme_attach_controller" 00:27:39.994 },{ 00:27:39.994 "params": { 00:27:39.994 "name": "Nvme1", 00:27:39.994 "trtype": "tcp", 00:27:39.994 "traddr": "10.0.0.3", 00:27:39.994 "adrfam": "ipv4", 00:27:39.994 "trsvcid": "4420", 00:27:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:39.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:39.994 "hdgst": false, 00:27:39.994 "ddgst": false 00:27:39.994 }, 00:27:39.994 "method": "bdev_nvme_attach_controller" 00:27:39.994 },{ 00:27:39.994 "params": { 00:27:39.994 "name": "Nvme2", 00:27:39.994 "trtype": "tcp", 00:27:39.994 "traddr": "10.0.0.3", 00:27:39.994 "adrfam": "ipv4", 00:27:39.994 "trsvcid": "4420", 00:27:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:39.994 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:39.994 "hdgst": false, 00:27:39.994 "ddgst": false 00:27:39.994 }, 00:27:39.994 "method": "bdev_nvme_attach_controller" 00:27:39.994 }' 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:39.994 
17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:39.994 17:27:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:39.994 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:39.994 ... 00:27:39.994 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:39.994 ... 00:27:39.994 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:39.994 ... 00:27:39.994 fio-3.35 00:27:39.994 Starting 24 threads 00:27:52.189 00:27:52.189 filename0: (groupid=0, jobs=1): err= 0: pid=108322: Fri Dec 6 17:27:33 2024 00:27:52.189 read: IOPS=209, BW=838KiB/s (858kB/s)(8388KiB/10008msec) 00:27:52.189 slat (usec): min=3, max=8043, avg=26.50, stdev=315.67 00:27:52.189 clat (msec): min=8, max=252, avg=76.19, stdev=27.38 00:27:52.189 lat (msec): min=8, max=252, avg=76.22, stdev=27.38 00:27:52.189 clat percentiles (msec): 00:27:52.189 | 1.00th=[ 14], 5.00th=[ 38], 10.00th=[ 48], 20.00th=[ 58], 00:27:52.189 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:27:52.189 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:52.189 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 253], 99.95th=[ 253], 00:27:52.189 | 99.99th=[ 253] 00:27:52.189 bw ( KiB/s): min= 640, max= 1580, per=4.40%, avg=837.26, stdev=212.67, samples=19 00:27:52.189 iops : min= 160, max= 395, avg=209.32, stdev=53.17, samples=19 00:27:52.189 lat (msec) : 10=0.76%, 20=2.19%, 50=10.92%, 100=71.20%, 250=14.50% 00:27:52.189 lat (msec) : 500=0.43% 00:27:52.189 cpu : usr=33.80%, sys=1.44%, ctx=872, majf=0, minf=9 00:27:52.189 IO depths : 1=1.3%, 2=3.1%, 4=10.5%, 8=73.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:52.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.189 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.189 issued rwts: total=2097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.189 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108323: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=172, BW=690KiB/s (707kB/s)(6912KiB/10013msec) 00:27:52.190 slat (nsec): min=4782, max=33041, avg=11160.79, stdev=3920.79 00:27:52.190 clat (msec): min=38, max=274, avg=92.63, stdev=28.62 00:27:52.190 lat (msec): min=38, max=274, avg=92.64, stdev=28.62 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 72], 00:27:52.190 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 85], 60.00th=[ 96], 00:27:52.190 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 157], 00:27:52.190 | 99.00th=[ 184], 99.50th=[ 253], 99.90th=[ 275], 99.95th=[ 275], 00:27:52.190 | 99.99th=[ 275] 00:27:52.190 bw ( KiB/s): min= 512, max= 896, per=3.61%, avg=686.79, stdev=120.25, samples=19 00:27:52.190 iops : min= 128, max= 224, avg=171.68, stdev=30.05, samples=19 00:27:52.190 lat (msec) : 50=3.41%, 100=66.67%, 250=29.40%, 500=0.52% 00:27:52.190 cpu : usr=32.08%, sys=1.07%, ctx=845, majf=0, minf=9 00:27:52.190 IO depths : 1=2.5%, 2=5.8%, 4=16.4%, 8=65.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:27:52.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 complete : 0=0.0%, 
4=91.7%, 8=2.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108324: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=192, BW=771KiB/s (789kB/s)(7752KiB/10055msec) 00:27:52.190 slat (usec): min=4, max=4034, avg=20.74, stdev=181.02 00:27:52.190 clat (msec): min=31, max=238, avg=82.73, stdev=29.43 00:27:52.190 lat (msec): min=31, max=238, avg=82.75, stdev=29.44 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 59], 00:27:52.190 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:27:52.190 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 116], 95.00th=[ 132], 00:27:52.190 | 99.00th=[ 197], 99.50th=[ 239], 99.90th=[ 239], 99.95th=[ 239], 00:27:52.190 | 99.99th=[ 239] 00:27:52.190 bw ( KiB/s): min= 512, max= 992, per=4.04%, avg=769.90, stdev=148.94, samples=20 00:27:52.190 iops : min= 128, max= 248, avg=192.45, stdev=37.21, samples=20 00:27:52.190 lat (msec) : 50=9.65%, 100=67.96%, 250=22.39% 00:27:52.190 cpu : usr=41.56%, sys=1.17%, ctx=1321, majf=0, minf=9 00:27:52.190 IO depths : 1=2.1%, 2=4.6%, 4=13.2%, 8=68.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:52.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108325: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=182, BW=731KiB/s (749kB/s)(7340KiB/10036msec) 00:27:52.190 slat (usec): min=4, max=8033, avg=31.55, stdev=374.00 00:27:52.190 clat (msec): min=32, max=200, avg=87.29, stdev=26.66 00:27:52.190 lat (msec): min=32, max=200, avg=87.33, stdev=26.66 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 69], 00:27:52.190 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 86], 00:27:52.190 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 136], 00:27:52.190 | 99.00th=[ 184], 99.50th=[ 201], 99.90th=[ 201], 99.95th=[ 201], 00:27:52.190 | 99.99th=[ 201] 00:27:52.190 bw ( KiB/s): min= 512, max= 1032, per=3.82%, avg=727.65, stdev=141.86, samples=20 00:27:52.190 iops : min= 128, max= 258, avg=181.90, stdev=35.46, samples=20 00:27:52.190 lat (msec) : 50=5.18%, 100=68.88%, 250=25.94% 00:27:52.190 cpu : usr=36.86%, sys=1.13%, ctx=1117, majf=0, minf=9 00:27:52.190 IO depths : 1=2.2%, 2=4.9%, 4=13.4%, 8=68.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:27:52.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 complete : 0=0.0%, 4=90.9%, 8=4.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108326: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=222, BW=889KiB/s (910kB/s)(8948KiB/10070msec) 00:27:52.190 slat (nsec): min=6027, max=60817, avg=11692.15, stdev=5503.14 00:27:52.190 clat (msec): min=14, max=224, avg=71.67, stdev=25.91 00:27:52.190 lat (msec): min=14, max=224, avg=71.68, stdev=25.91 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 18], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 54], 00:27:52.190 | 
30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 73], 00:27:52.190 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 121], 00:27:52.190 | 99.00th=[ 159], 99.50th=[ 188], 99.90th=[ 224], 99.95th=[ 224], 00:27:52.190 | 99.99th=[ 224] 00:27:52.190 bw ( KiB/s): min= 592, max= 1274, per=4.68%, avg=891.70, stdev=194.08, samples=20 00:27:52.190 iops : min= 148, max= 318, avg=222.90, stdev=48.47, samples=20 00:27:52.190 lat (msec) : 20=1.03%, 50=15.69%, 100=73.98%, 250=9.30% 00:27:52.190 cpu : usr=41.58%, sys=1.25%, ctx=1219, majf=0, minf=9 00:27:52.190 IO depths : 1=0.8%, 2=1.8%, 4=8.4%, 8=76.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:27:52.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 issued rwts: total=2237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108327: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=176, BW=705KiB/s (721kB/s)(7064KiB/10026msec) 00:27:52.190 slat (usec): min=3, max=4022, avg=17.27, stdev=120.16 00:27:52.190 clat (msec): min=48, max=253, avg=90.61, stdev=27.33 00:27:52.190 lat (msec): min=48, max=253, avg=90.63, stdev=27.34 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 52], 5.00th=[ 58], 10.00th=[ 66], 20.00th=[ 72], 00:27:52.190 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 90], 00:27:52.190 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 136], 00:27:52.190 | 99.00th=[ 186], 99.50th=[ 232], 99.90th=[ 253], 99.95th=[ 253], 00:27:52.190 | 99.99th=[ 253] 00:27:52.190 bw ( KiB/s): min= 432, max= 896, per=3.69%, avg=701.75, stdev=130.19, samples=20 00:27:52.190 iops : min= 108, max= 224, avg=175.40, stdev=32.53, samples=20 00:27:52.190 lat (msec) : 50=0.34%, 100=70.72%, 250=28.54%, 500=0.40% 00:27:52.190 cpu : usr=44.78%, sys=1.25%, ctx=1268, majf=0, minf=9 00:27:52.190 IO depths : 1=3.6%, 2=8.0%, 4=19.4%, 8=59.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:27:52.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 complete : 0=0.0%, 4=92.6%, 8=1.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 issued rwts: total=1766,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108328: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=174, BW=697KiB/s (714kB/s)(6988KiB/10019msec) 00:27:52.190 slat (usec): min=4, max=8032, avg=24.47, stdev=332.03 00:27:52.190 clat (msec): min=38, max=276, avg=91.57, stdev=28.76 00:27:52.190 lat (msec): min=38, max=276, avg=91.59, stdev=28.75 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 48], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 72], 00:27:52.190 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 95], 00:27:52.190 | 70.00th=[ 99], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 136], 00:27:52.190 | 99.00th=[ 190], 99.50th=[ 203], 99.90th=[ 275], 99.95th=[ 275], 00:27:52.190 | 99.99th=[ 275] 00:27:52.190 bw ( KiB/s): min= 416, max= 896, per=3.64%, avg=692.30, stdev=144.12, samples=20 00:27:52.190 iops : min= 104, max= 224, avg=173.05, stdev=36.04, samples=20 00:27:52.190 lat (msec) : 50=2.29%, 100=68.86%, 250=28.45%, 500=0.40% 00:27:52.190 cpu : usr=32.03%, sys=0.93%, ctx=857, majf=0, minf=9 00:27:52.190 IO depths : 1=2.3%, 2=5.4%, 4=15.5%, 8=66.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:52.190 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 complete : 0=0.0%, 4=91.5%, 8=3.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.190 issued rwts: total=1747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.190 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.190 filename0: (groupid=0, jobs=1): err= 0: pid=108329: Fri Dec 6 17:27:33 2024 00:27:52.190 read: IOPS=190, BW=761KiB/s (779kB/s)(7640KiB/10045msec) 00:27:52.190 slat (usec): min=4, max=7044, avg=19.99, stdev=206.89 00:27:52.190 clat (msec): min=34, max=271, avg=83.99, stdev=28.87 00:27:52.190 lat (msec): min=34, max=272, avg=84.01, stdev=28.87 00:27:52.190 clat percentiles (msec): 00:27:52.190 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 62], 00:27:52.190 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 85], 00:27:52.190 | 70.00th=[ 92], 80.00th=[ 107], 90.00th=[ 116], 95.00th=[ 128], 00:27:52.190 | 99.00th=[ 178], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:27:52.190 | 99.99th=[ 271] 00:27:52.191 bw ( KiB/s): min= 512, max= 1072, per=3.98%, avg=757.05, stdev=138.41, samples=20 00:27:52.191 iops : min= 128, max= 268, avg=189.20, stdev=34.62, samples=20 00:27:52.191 lat (msec) : 50=7.75%, 100=68.12%, 250=23.61%, 500=0.52% 00:27:52.191 cpu : usr=42.03%, sys=1.41%, ctx=1447, majf=0, minf=9 00:27:52.191 IO depths : 1=3.2%, 2=7.4%, 4=18.3%, 8=61.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=1910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108330: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=215, BW=860KiB/s (881kB/s)(8660KiB/10069msec) 00:27:52.191 slat (usec): min=4, max=8019, avg=14.78, stdev=172.17 00:27:52.191 clat (msec): min=16, max=225, avg=74.18, stdev=26.68 00:27:52.191 lat (msec): min=16, max=225, avg=74.19, stdev=26.68 00:27:52.191 clat percentiles (msec): 00:27:52.191 | 1.00th=[ 23], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:27:52.191 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 75], 00:27:52.191 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 00:27:52.191 | 99.00th=[ 167], 99.50th=[ 192], 99.90th=[ 226], 99.95th=[ 226], 00:27:52.191 | 99.99th=[ 226] 00:27:52.191 bw ( KiB/s): min= 472, max= 1250, per=4.52%, avg=859.70, stdev=172.92, samples=20 00:27:52.191 iops : min= 118, max= 312, avg=214.90, stdev=43.17, samples=20 00:27:52.191 lat (msec) : 20=0.74%, 50=21.06%, 100=64.62%, 250=13.58% 00:27:52.191 cpu : usr=32.26%, sys=0.90%, ctx=858, majf=0, minf=9 00:27:52.191 IO depths : 1=0.8%, 2=1.7%, 4=7.4%, 8=77.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108331: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=207, BW=831KiB/s (851kB/s)(8360KiB/10064msec) 00:27:52.191 slat (usec): min=5, max=4080, avg=16.79, stdev=144.18 00:27:52.191 clat (msec): min=28, max=236, avg=76.72, stdev=25.87 00:27:52.191 lat (msec): min=28, max=236, avg=76.74, stdev=25.88 00:27:52.191 clat percentiles 
(msec): 00:27:52.191 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 51], 20.00th=[ 56], 00:27:52.191 | 30.00th=[ 62], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 80], 00:27:52.191 | 70.00th=[ 85], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 117], 00:27:52.191 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 236], 99.95th=[ 236], 00:27:52.191 | 99.99th=[ 236] 00:27:52.191 bw ( KiB/s): min= 560, max= 1072, per=4.36%, avg=829.30, stdev=137.91, samples=20 00:27:52.191 iops : min= 140, max= 268, avg=207.30, stdev=34.47, samples=20 00:27:52.191 lat (msec) : 50=9.62%, 100=76.12%, 250=14.26% 00:27:52.191 cpu : usr=39.51%, sys=1.30%, ctx=1484, majf=0, minf=9 00:27:52.191 IO depths : 1=1.7%, 2=3.6%, 4=11.6%, 8=71.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108332: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=182, BW=730KiB/s (747kB/s)(7332KiB/10045msec) 00:27:52.191 slat (usec): min=4, max=8020, avg=16.42, stdev=187.13 00:27:52.191 clat (msec): min=30, max=237, avg=87.44, stdev=27.26 00:27:52.191 lat (msec): min=30, max=237, avg=87.45, stdev=27.26 00:27:52.191 clat percentiles (msec): 00:27:52.191 | 1.00th=[ 40], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 71], 00:27:52.191 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 90], 00:27:52.191 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 116], 95.00th=[ 131], 00:27:52.191 | 99.00th=[ 192], 99.50th=[ 213], 99.90th=[ 239], 99.95th=[ 239], 00:27:52.191 | 99.99th=[ 239] 00:27:52.191 bw ( KiB/s): min= 512, max= 848, per=3.83%, avg=728.20, stdev=90.92, samples=20 00:27:52.191 iops : min= 128, max= 212, avg=182.00, stdev=22.73, samples=20 00:27:52.191 lat (msec) : 50=3.87%, 100=69.99%, 250=26.13% 00:27:52.191 cpu : usr=43.04%, sys=1.18%, ctx=1344, majf=0, minf=9 00:27:52.191 IO depths : 1=2.4%, 2=5.3%, 4=14.2%, 8=66.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=91.1%, 8=4.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=1833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108333: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=170, BW=682KiB/s (698kB/s)(6836KiB/10027msec) 00:27:52.191 slat (usec): min=4, max=8024, avg=15.60, stdev=193.88 00:27:52.191 clat (msec): min=35, max=255, avg=93.75, stdev=31.11 00:27:52.191 lat (msec): min=35, max=255, avg=93.76, stdev=31.11 00:27:52.191 clat percentiles (msec): 00:27:52.191 | 1.00th=[ 38], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 72], 00:27:52.191 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 96], 00:27:52.191 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 123], 95.00th=[ 144], 00:27:52.191 | 99.00th=[ 209], 99.50th=[ 255], 99.90th=[ 255], 99.95th=[ 255], 00:27:52.191 | 99.99th=[ 255] 00:27:52.191 bw ( KiB/s): min= 384, max= 920, per=3.55%, avg=676.60, stdev=129.16, samples=20 00:27:52.191 iops : min= 96, max= 230, avg=169.15, stdev=32.29, samples=20 00:27:52.191 lat (msec) : 50=4.45%, 100=64.48%, 250=30.08%, 500=0.99% 00:27:52.191 cpu : usr=31.70%, sys=1.24%, ctx=851, majf=0, minf=9 00:27:52.191 IO depths : 1=2.6%, 2=6.0%, 
4=15.9%, 8=65.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=1709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108334: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=195, BW=784KiB/s (802kB/s)(7880KiB/10057msec) 00:27:52.191 slat (usec): min=6, max=8051, avg=21.29, stdev=255.82 00:27:52.191 clat (msec): min=23, max=215, avg=81.37, stdev=27.87 00:27:52.191 lat (msec): min=23, max=215, avg=81.39, stdev=27.87 00:27:52.191 clat percentiles (msec): 00:27:52.191 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 61], 00:27:52.191 | 30.00th=[ 69], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:27:52.191 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 118], 95.00th=[ 126], 00:27:52.191 | 99.00th=[ 192], 99.50th=[ 205], 99.90th=[ 215], 99.95th=[ 215], 00:27:52.191 | 99.99th=[ 215] 00:27:52.191 bw ( KiB/s): min= 472, max= 1015, per=4.12%, avg=783.50, stdev=143.08, samples=20 00:27:52.191 iops : min= 118, max= 253, avg=195.80, stdev=35.72, samples=20 00:27:52.191 lat (msec) : 50=8.38%, 100=72.79%, 250=18.83% 00:27:52.191 cpu : usr=34.43%, sys=1.01%, ctx=917, majf=0, minf=9 00:27:52.191 IO depths : 1=2.3%, 2=5.1%, 4=14.1%, 8=68.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=91.0%, 8=3.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=1970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108335: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=230, BW=922KiB/s (944kB/s)(9268KiB/10049msec) 00:27:52.191 slat (nsec): min=3835, max=94338, avg=12534.05, stdev=7198.22 00:27:52.191 clat (usec): min=1614, max=252108, avg=69145.44, stdev=31389.92 00:27:52.191 lat (usec): min=1622, max=252122, avg=69157.98, stdev=31390.10 00:27:52.191 clat percentiles (usec): 00:27:52.191 | 1.00th=[ 1680], 5.00th=[ 10159], 10.00th=[ 39584], 20.00th=[ 47973], 00:27:52.191 | 30.00th=[ 56886], 40.00th=[ 60031], 50.00th=[ 70779], 60.00th=[ 72877], 00:27:52.191 | 70.00th=[ 82314], 80.00th=[ 86508], 90.00th=[ 98042], 95.00th=[110625], 00:27:52.191 | 99.00th=[181404], 99.50th=[238027], 99.90th=[252707], 99.95th=[252707], 00:27:52.191 | 99.99th=[252707] 00:27:52.191 bw ( KiB/s): min= 528, max= 2360, per=4.84%, avg=920.30, stdev=375.84, samples=20 00:27:52.191 iops : min= 132, max= 590, avg=230.05, stdev=93.97, samples=20 00:27:52.191 lat (msec) : 2=2.46%, 4=0.99%, 10=1.38%, 20=2.59%, 50=16.57% 00:27:52.191 lat (msec) : 100=66.25%, 250=9.58%, 500=0.17% 00:27:52.191 cpu : usr=34.73%, sys=0.99%, ctx=965, majf=0, minf=9 00:27:52.191 IO depths : 1=0.9%, 2=2.4%, 4=10.7%, 8=73.5%, 16=12.5%, 32=0.0%, >=64=0.0% 00:27:52.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 complete : 0=0.0%, 4=90.3%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.191 issued rwts: total=2317,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.191 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.191 filename1: (groupid=0, jobs=1): err= 0: pid=108336: Fri Dec 6 17:27:33 2024 00:27:52.191 read: IOPS=176, BW=705KiB/s (722kB/s)(7072KiB/10031msec) 00:27:52.191 slat (usec): 
min=4, max=8050, avg=26.69, stdev=330.53 00:27:52.191 clat (msec): min=32, max=251, avg=90.55, stdev=27.38 00:27:52.191 lat (msec): min=32, max=251, avg=90.58, stdev=27.38 00:27:52.191 clat percentiles (msec): 00:27:52.191 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 72], 00:27:52.191 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 94], 00:27:52.191 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 140], 00:27:52.191 | 99.00th=[ 192], 99.50th=[ 215], 99.90th=[ 215], 99.95th=[ 253], 00:27:52.191 | 99.99th=[ 253] 00:27:52.191 bw ( KiB/s): min= 480, max= 832, per=3.69%, avg=702.10, stdev=105.55, samples=20 00:27:52.192 iops : min= 120, max= 208, avg=175.50, stdev=26.37, samples=20 00:27:52.192 lat (msec) : 50=3.68%, 100=65.72%, 250=30.54%, 500=0.06% 00:27:52.192 cpu : usr=32.08%, sys=0.90%, ctx=859, majf=0, minf=9 00:27:52.192 IO depths : 1=1.6%, 2=4.3%, 4=12.8%, 8=68.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=91.2%, 8=4.9%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=1768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename1: (groupid=0, jobs=1): err= 0: pid=108337: Fri Dec 6 17:27:33 2024 00:27:52.192 read: IOPS=184, BW=739KiB/s (757kB/s)(7408KiB/10020msec) 00:27:52.192 slat (usec): min=4, max=9054, avg=27.46, stdev=310.33 00:27:52.192 clat (msec): min=32, max=256, avg=86.25, stdev=27.94 00:27:52.192 lat (msec): min=32, max=256, avg=86.28, stdev=27.95 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 47], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 65], 00:27:52.192 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 88], 00:27:52.192 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 125], 00:27:52.192 | 99.00th=[ 207], 99.50th=[ 232], 99.90th=[ 257], 99.95th=[ 257], 00:27:52.192 | 99.99th=[ 257] 00:27:52.192 bw ( KiB/s): min= 512, max= 944, per=3.88%, avg=738.20, stdev=132.89, samples=20 00:27:52.192 iops : min= 128, max= 236, avg=184.55, stdev=33.22, samples=20 00:27:52.192 lat (msec) : 50=3.78%, 100=73.33%, 250=22.57%, 500=0.32% 00:27:52.192 cpu : usr=41.02%, sys=0.97%, ctx=1219, majf=0, minf=9 00:27:52.192 IO depths : 1=2.4%, 2=5.7%, 4=17.0%, 8=64.5%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=1852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename2: (groupid=0, jobs=1): err= 0: pid=108338: Fri Dec 6 17:27:33 2024 00:27:52.192 read: IOPS=218, BW=873KiB/s (894kB/s)(8792KiB/10066msec) 00:27:52.192 slat (usec): min=4, max=8030, avg=14.29, stdev=171.12 00:27:52.192 clat (msec): min=13, max=239, avg=73.03, stdev=26.23 00:27:52.192 lat (msec): min=13, max=239, avg=73.05, stdev=26.24 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 21], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 50], 00:27:52.192 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 72], 00:27:52.192 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 121], 00:27:52.192 | 99.00th=[ 146], 99.50th=[ 176], 99.90th=[ 241], 99.95th=[ 241], 00:27:52.192 | 99.99th=[ 241] 00:27:52.192 bw ( KiB/s): min= 568, max= 1200, per=4.58%, avg=872.80, stdev=187.94, samples=20 00:27:52.192 iops : min= 142, 
max= 300, avg=218.20, stdev=46.98, samples=20 00:27:52.192 lat (msec) : 20=0.96%, 50=20.25%, 100=64.97%, 250=13.83% 00:27:52.192 cpu : usr=31.92%, sys=1.10%, ctx=859, majf=0, minf=10 00:27:52.192 IO depths : 1=0.9%, 2=2.1%, 4=8.6%, 8=75.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=89.6%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=2198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename2: (groupid=0, jobs=1): err= 0: pid=108339: Fri Dec 6 17:27:33 2024 00:27:52.192 read: IOPS=205, BW=822KiB/s (841kB/s)(8244KiB/10034msec) 00:27:52.192 slat (usec): min=4, max=4022, avg=17.69, stdev=125.08 00:27:52.192 clat (msec): min=27, max=232, avg=77.77, stdev=24.71 00:27:52.192 lat (msec): min=27, max=232, avg=77.79, stdev=24.71 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:27:52.192 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 82], 00:27:52.192 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 118], 00:27:52.192 | 99.00th=[ 136], 99.50th=[ 190], 99.90th=[ 232], 99.95th=[ 232], 00:27:52.192 | 99.99th=[ 232] 00:27:52.192 bw ( KiB/s): min= 621, max= 1080, per=4.30%, avg=817.05, stdev=137.72, samples=20 00:27:52.192 iops : min= 155, max= 270, avg=204.25, stdev=34.45, samples=20 00:27:52.192 lat (msec) : 50=12.13%, 100=71.13%, 250=16.74% 00:27:52.192 cpu : usr=43.37%, sys=1.37%, ctx=1158, majf=0, minf=9 00:27:52.192 IO depths : 1=2.0%, 2=4.5%, 4=12.0%, 8=70.2%, 16=11.3%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=90.8%, 8=4.4%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename2: (groupid=0, jobs=1): err= 0: pid=108340: Fri Dec 6 17:27:33 2024 00:27:52.192 read: IOPS=237, BW=950KiB/s (973kB/s)(9532KiB/10034msec) 00:27:52.192 slat (usec): min=4, max=8025, avg=18.21, stdev=232.14 00:27:52.192 clat (msec): min=16, max=227, avg=67.21, stdev=25.48 00:27:52.192 lat (msec): min=16, max=227, avg=67.23, stdev=25.48 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 26], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:27:52.192 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 62], 60.00th=[ 69], 00:27:52.192 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:27:52.192 | 99.00th=[ 157], 99.50th=[ 192], 99.90th=[ 228], 99.95th=[ 228], 00:27:52.192 | 99.99th=[ 228] 00:27:52.192 bw ( KiB/s): min= 432, max= 1200, per=4.99%, avg=949.00, stdev=210.01, samples=20 00:27:52.192 iops : min= 108, max= 300, avg=237.20, stdev=52.49, samples=20 00:27:52.192 lat (msec) : 20=0.67%, 50=24.13%, 100=67.81%, 250=7.39% 00:27:52.192 cpu : usr=40.70%, sys=1.34%, ctx=1182, majf=0, minf=9 00:27:52.192 IO depths : 1=0.5%, 2=1.0%, 4=6.3%, 8=79.0%, 16=13.1%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=89.1%, 8=6.5%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=2383,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename2: (groupid=0, jobs=1): err= 0: pid=108341: Fri Dec 6 17:27:33 2024 
00:27:52.192 read: IOPS=220, BW=884KiB/s (905kB/s)(8844KiB/10007msec) 00:27:52.192 slat (usec): min=4, max=4026, avg=20.35, stdev=190.65 00:27:52.192 clat (msec): min=15, max=237, avg=72.23, stdev=26.11 00:27:52.192 lat (msec): min=15, max=237, avg=72.25, stdev=26.12 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 53], 00:27:52.192 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 75], 00:27:52.192 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 109], 00:27:52.192 | 99.00th=[ 142], 99.50th=[ 236], 99.90th=[ 239], 99.95th=[ 239], 00:27:52.192 | 99.99th=[ 239] 00:27:52.192 bw ( KiB/s): min= 608, max= 1373, per=4.68%, avg=890.37, stdev=202.43, samples=19 00:27:52.192 iops : min= 152, max= 343, avg=222.58, stdev=50.57, samples=19 00:27:52.192 lat (msec) : 20=1.45%, 50=14.88%, 100=71.01%, 250=12.66% 00:27:52.192 cpu : usr=42.78%, sys=1.24%, ctx=1379, majf=0, minf=9 00:27:52.192 IO depths : 1=2.5%, 2=6.3%, 4=17.0%, 8=63.8%, 16=10.4%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=92.0%, 8=2.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=2211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename2: (groupid=0, jobs=1): err= 0: pid=108342: Fri Dec 6 17:27:33 2024 00:27:52.192 read: IOPS=228, BW=915KiB/s (937kB/s)(9220KiB/10072msec) 00:27:52.192 slat (usec): min=5, max=10103, avg=18.15, stdev=210.37 00:27:52.192 clat (msec): min=13, max=235, avg=69.66, stdev=25.57 00:27:52.192 lat (msec): min=13, max=235, avg=69.68, stdev=25.57 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 20], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 50], 00:27:52.192 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 67], 60.00th=[ 72], 00:27:52.192 | 70.00th=[ 81], 80.00th=[ 87], 90.00th=[ 102], 95.00th=[ 118], 00:27:52.192 | 99.00th=[ 136], 99.50th=[ 178], 99.90th=[ 236], 99.95th=[ 236], 00:27:52.192 | 99.99th=[ 236] 00:27:52.192 bw ( KiB/s): min= 592, max= 1536, per=4.81%, avg=915.60, stdev=213.51, samples=20 00:27:52.192 iops : min= 148, max= 384, avg=228.90, stdev=53.38, samples=20 00:27:52.192 lat (msec) : 20=1.39%, 50=19.31%, 100=68.85%, 250=10.46% 00:27:52.192 cpu : usr=43.22%, sys=1.13%, ctx=1460, majf=0, minf=9 00:27:52.192 IO depths : 1=0.4%, 2=1.0%, 4=6.0%, 8=78.8%, 16=13.7%, 32=0.0%, >=64=0.0% 00:27:52.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 complete : 0=0.0%, 4=89.3%, 8=6.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.192 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.192 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.192 filename2: (groupid=0, jobs=1): err= 0: pid=108343: Fri Dec 6 17:27:33 2024 00:27:52.192 read: IOPS=182, BW=729KiB/s (746kB/s)(7312KiB/10037msec) 00:27:52.192 slat (usec): min=4, max=4063, avg=14.59, stdev=94.98 00:27:52.192 clat (msec): min=33, max=253, avg=87.70, stdev=30.85 00:27:52.192 lat (msec): min=33, max=253, avg=87.72, stdev=30.85 00:27:52.192 clat percentiles (msec): 00:27:52.192 | 1.00th=[ 44], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 65], 00:27:52.192 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 84], 60.00th=[ 89], 00:27:52.192 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 123], 95.00th=[ 136], 00:27:52.192 | 99.00th=[ 203], 99.50th=[ 255], 99.90th=[ 255], 99.95th=[ 255], 00:27:52.192 | 99.99th=[ 255] 00:27:52.192 bw ( KiB/s): min= 
384, max= 1072, per=3.81%, avg=724.80, stdev=172.02, samples=20 00:27:52.192 iops : min= 96, max= 268, avg=181.20, stdev=43.01, samples=20 00:27:52.192 lat (msec) : 50=4.38%, 100=68.60%, 250=26.15%, 500=0.88% 00:27:52.192 cpu : usr=38.74%, sys=0.93%, ctx=1254, majf=0, minf=9 00:27:52.193 IO depths : 1=2.8%, 2=5.7%, 4=15.2%, 8=66.0%, 16=10.3%, 32=0.0%, >=64=0.0% 00:27:52.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.193 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.193 issued rwts: total=1828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.193 filename2: (groupid=0, jobs=1): err= 0: pid=108344: Fri Dec 6 17:27:33 2024 00:27:52.193 read: IOPS=215, BW=862KiB/s (883kB/s)(8688KiB/10075msec) 00:27:52.193 slat (usec): min=5, max=8022, avg=15.03, stdev=171.96 00:27:52.193 clat (msec): min=15, max=251, avg=74.01, stdev=28.60 00:27:52.193 lat (msec): min=15, max=251, avg=74.03, stdev=28.60 00:27:52.193 clat percentiles (msec): 00:27:52.193 | 1.00th=[ 18], 5.00th=[ 35], 10.00th=[ 48], 20.00th=[ 57], 00:27:52.193 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 74], 00:27:52.193 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 121], 00:27:52.193 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 253], 99.95th=[ 253], 00:27:52.193 | 99.99th=[ 253] 00:27:52.193 bw ( KiB/s): min= 656, max= 1482, per=4.53%, avg=862.10, stdev=184.96, samples=20 00:27:52.193 iops : min= 164, max= 370, avg=215.50, stdev=46.15, samples=20 00:27:52.193 lat (msec) : 20=2.76%, 50=15.33%, 100=68.09%, 250=13.54%, 500=0.28% 00:27:52.193 cpu : usr=32.34%, sys=0.84%, ctx=853, majf=0, minf=9 00:27:52.193 IO depths : 1=0.4%, 2=1.2%, 4=7.3%, 8=77.4%, 16=13.6%, 32=0.0%, >=64=0.0% 00:27:52.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.193 complete : 0=0.0%, 4=89.6%, 8=6.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.193 issued rwts: total=2172,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.193 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:52.193 filename2: (groupid=0, jobs=1): err= 0: pid=108345: Fri Dec 6 17:27:33 2024 00:27:52.193 read: IOPS=178, BW=713KiB/s (730kB/s)(7140KiB/10019msec) 00:27:52.193 slat (usec): min=3, max=8034, avg=24.12, stdev=254.46 00:27:52.193 clat (msec): min=38, max=186, avg=89.56, stdev=26.07 00:27:52.193 lat (msec): min=38, max=186, avg=89.58, stdev=26.07 00:27:52.193 clat percentiles (msec): 00:27:52.193 | 1.00th=[ 48], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 70], 00:27:52.193 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:27:52.193 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 132], 00:27:52.193 | 99.00th=[ 182], 99.50th=[ 186], 99.90th=[ 186], 99.95th=[ 186], 00:27:52.193 | 99.99th=[ 186] 00:27:52.193 bw ( KiB/s): min= 512, max= 896, per=3.72%, avg=707.60, stdev=123.89, samples=20 00:27:52.193 iops : min= 128, max= 224, avg=176.90, stdev=30.97, samples=20 00:27:52.193 lat (msec) : 50=2.75%, 100=66.89%, 250=30.36% 00:27:52.193 cpu : usr=36.32%, sys=1.02%, ctx=1033, majf=0, minf=10 00:27:52.193 IO depths : 1=2.5%, 2=5.6%, 4=15.5%, 8=65.8%, 16=10.6%, 32=0.0%, >=64=0.0% 00:27:52.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.193 complete : 0=0.0%, 4=91.3%, 8=3.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:52.193 issued rwts: total=1785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:52.193 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:27:52.193 00:27:52.193 Run status group 0 (all jobs): 00:27:52.193 READ: bw=18.6MiB/s (19.5MB/s), 682KiB/s-950KiB/s (698kB/s-973kB/s), io=187MiB (196MB), run=10007-10075msec 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
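The teardown running here mirrors the setup: for each of the three subsystems, destroy_subsystem first deletes the NVMe-oF subsystem over JSON-RPC and then removes the null bdev that backed it. A minimal standalone sketch of the same sequence, assuming SPDK's scripts/rpc.py against the default RPC socket (the test's rpc_cmd wrapper is doing the equivalent):

    # delete each NVMe-oF subsystem, then the null bdev that backed it
    for i in 0 1 2; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "bdev_null$i"
    done
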
00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 bdev_null0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 [2024-12-06 17:27:33.382855] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 bdev_null1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:52.193 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:52.194 { 00:27:52.194 "params": { 00:27:52.194 "name": "Nvme$subsystem", 00:27:52.194 "trtype": "$TEST_TRANSPORT", 00:27:52.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.194 "adrfam": "ipv4", 00:27:52.194 "trsvcid": "$NVMF_PORT", 00:27:52.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.194 "hdgst": ${hdgst:-false}, 00:27:52.194 "ddgst": ${ddgst:-false} 00:27:52.194 }, 00:27:52.194 "method": "bdev_nvme_attach_controller" 00:27:52.194 } 00:27:52.194 EOF 00:27:52.194 )") 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 
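Before the fio configuration generation now underway, create_subsystems rebuilt the target with DIF type 1 metadata: each null bdev carries a 16-byte metadata area, is exposed as a namespace of its own subsystem, and gets a TCP listener. A hedged standalone equivalent of the rpc_cmd calls visible above, using the same names and flags that appear in the log:

    # per subsystem: DIF-type-1 null bdev (64 MB, 512 B blocks, 16 B metadata),
    # then the subsystem, its namespace, and a TCP listener on 10.0.0.3:4420
    for i in 0 1; do
        scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done
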
00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:52.194 { 00:27:52.194 "params": { 00:27:52.194 "name": "Nvme$subsystem", 00:27:52.194 "trtype": "$TEST_TRANSPORT", 00:27:52.194 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.194 "adrfam": "ipv4", 00:27:52.194 "trsvcid": "$NVMF_PORT", 00:27:52.194 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.194 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.194 "hdgst": ${hdgst:-false}, 00:27:52.194 "ddgst": ${ddgst:-false} 00:27:52.194 }, 00:27:52.194 "method": "bdev_nvme_attach_controller" 00:27:52.194 } 00:27:52.194 EOF 00:27:52.194 )") 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
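The jq/IFS/printf trio at the end of gen_nvmf_target_json turns the per-subsystem heredoc fragments collected in the config array into the single attach-controller document printed just below. The joining trick relies on bash expanding "${array[*]}" with the first character of IFS between the elements; a minimal sketch of just that step:

    # two JSON fragments, as built by the config+=("$(cat <<-EOF ...)") lines
    config=('{ "params": { "name": "Nvme0" } }' '{ "params": { "name": "Nvme1" } }')
    IFS=,                           # "${config[*]}" joins elements on this char
    printf '%s\n' "${config[*]}"    # -> { ..."Nvme0"... },{ ..."Nvme1"... }

The jq . step seen above presumably validates and pretty-prints the assembled stream before it is handed to fio on /dev/fd/62.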
00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:52.194 "params": { 00:27:52.194 "name": "Nvme0", 00:27:52.194 "trtype": "tcp", 00:27:52.194 "traddr": "10.0.0.3", 00:27:52.194 "adrfam": "ipv4", 00:27:52.194 "trsvcid": "4420", 00:27:52.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:52.194 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:52.194 "hdgst": false, 00:27:52.194 "ddgst": false 00:27:52.194 }, 00:27:52.194 "method": "bdev_nvme_attach_controller" 00:27:52.194 },{ 00:27:52.194 "params": { 00:27:52.194 "name": "Nvme1", 00:27:52.194 "trtype": "tcp", 00:27:52.194 "traddr": "10.0.0.3", 00:27:52.194 "adrfam": "ipv4", 00:27:52.194 "trsvcid": "4420", 00:27:52.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:52.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:52.194 "hdgst": false, 00:27:52.194 "ddgst": false 00:27:52.194 }, 00:27:52.194 "method": "bdev_nvme_attach_controller" 00:27:52.194 }' 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:52.194 17:27:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:52.194 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:52.194 ... 00:27:52.194 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:52.194 ... 
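The banner that follows comes from fio parsing the generated job file on /dev/fd/61: two spdk_bdev jobs at queue depth 8, with fio's triple block-size syntax (bs=8k,16k,128k sets the read, write, and trim sizes, hence the (R) 8192B / (W) 16.0KiB / (T) 128KiB readout). Below is a hedged sketch of an equivalent standalone invocation; the job-file layout, the bdev names Nvme0n1/Nvme1n1, and the bdev.json filename are illustrative assumptions, not taken from the log:

    # job file approximating the parameters dif.sh set for this pass
    cat > dif_rand.fio <<'EOF'
    [global]
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF

    # the plugin is LD_PRELOADed, exactly as in the log's fio command line
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_rand.fio
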
00:27:52.194 fio-3.35 00:27:52.194 Starting 4 threads 00:27:57.489 00:27:57.489 filename0: (groupid=0, jobs=1): err= 0: pid=108472: Fri Dec 6 17:27:39 2024 00:27:57.489 read: IOPS=1832, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5002msec) 00:27:57.489 slat (nsec): min=4000, max=43361, avg=15182.01, stdev=4232.38 00:27:57.489 clat (usec): min=3090, max=17862, avg=4291.01, stdev=1093.86 00:27:57.489 lat (usec): min=3109, max=17897, avg=4306.19, stdev=1093.38 00:27:57.489 clat percentiles (usec): 00:27:57.489 | 1.00th=[ 3982], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4047], 00:27:57.489 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:27:57.489 | 70.00th=[ 4146], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 5080], 00:27:57.489 | 99.00th=[ 5997], 99.50th=[16712], 99.90th=[17433], 99.95th=[17957], 00:27:57.489 | 99.99th=[17957] 00:27:57.489 bw ( KiB/s): min=12160, max=15488, per=24.90%, avg=14620.44, stdev=1232.36, samples=9 00:27:57.489 iops : min= 1520, max= 1936, avg=1827.56, stdev=154.04, samples=9 00:27:57.489 lat (msec) : 4=2.40%, 10=96.89%, 20=0.71% 00:27:57.489 cpu : usr=93.96%, sys=4.78%, ctx=3, majf=0, minf=0 00:27:57.489 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 issued rwts: total=9168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.489 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.489 filename0: (groupid=0, jobs=1): err= 0: pid=108473: Fri Dec 6 17:27:39 2024 00:27:57.489 read: IOPS=1832, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5002msec) 00:27:57.489 slat (nsec): min=8007, max=43547, avg=15051.78, stdev=4400.53 00:27:57.489 clat (usec): min=1633, max=17870, avg=4287.39, stdev=1097.28 00:27:57.489 lat (usec): min=1650, max=17891, avg=4302.44, stdev=1096.89 00:27:57.489 clat percentiles (usec): 00:27:57.489 | 1.00th=[ 3982], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4047], 00:27:57.489 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:27:57.489 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 5014], 00:27:57.489 | 99.00th=[ 5997], 99.50th=[16712], 99.90th=[17433], 99.95th=[17957], 00:27:57.489 | 99.99th=[17957] 00:27:57.489 bw ( KiB/s): min=12160, max=15488, per=24.91%, avg=14623.78, stdev=1234.26, samples=9 00:27:57.489 iops : min= 1520, max= 1936, avg=1827.89, stdev=154.23, samples=9 00:27:57.489 lat (msec) : 2=0.01%, 4=2.24%, 10=97.04%, 20=0.71% 00:27:57.489 cpu : usr=93.84%, sys=4.90%, ctx=7, majf=0, minf=10 00:27:57.489 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 issued rwts: total=9168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.489 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.489 filename1: (groupid=0, jobs=1): err= 0: pid=108474: Fri Dec 6 17:27:39 2024 00:27:57.489 read: IOPS=1832, BW=14.3MiB/s (15.0MB/s)(71.6MiB/5003msec) 00:27:57.489 slat (nsec): min=7997, max=86861, avg=11862.79, stdev=4875.18 00:27:57.489 clat (usec): min=3096, max=17509, avg=4306.86, stdev=1067.34 00:27:57.489 lat (usec): min=3109, max=17537, avg=4318.72, stdev=1067.08 00:27:57.489 clat percentiles (usec): 00:27:57.489 | 1.00th=[ 3982], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 
00:27:57.489 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:27:57.489 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 5080], 00:27:57.489 | 99.00th=[ 5997], 99.50th=[16581], 99.90th=[17171], 99.95th=[17433], 00:27:57.489 | 99.99th=[17433] 00:27:57.489 bw ( KiB/s): min=12160, max=15488, per=24.90%, avg=14620.44, stdev=1232.36, samples=9 00:27:57.489 iops : min= 1520, max= 1936, avg=1827.56, stdev=154.04, samples=9 00:27:57.489 lat (msec) : 4=1.13%, 10=98.17%, 20=0.70% 00:27:57.489 cpu : usr=94.20%, sys=4.44%, ctx=8, majf=0, minf=9 00:27:57.489 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 issued rwts: total=9168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.489 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.489 filename1: (groupid=0, jobs=1): err= 0: pid=108475: Fri Dec 6 17:27:39 2024 00:27:57.489 read: IOPS=1841, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5001msec) 00:27:57.489 slat (nsec): min=4934, max=52753, avg=10447.73, stdev=4383.93 00:27:57.489 clat (usec): min=1152, max=17540, avg=4291.24, stdev=1099.04 00:27:57.489 lat (usec): min=1162, max=17558, avg=4301.68, stdev=1098.77 00:27:57.489 clat percentiles (usec): 00:27:57.489 | 1.00th=[ 3949], 5.00th=[ 4047], 10.00th=[ 4080], 20.00th=[ 4080], 00:27:57.489 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:27:57.489 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 5080], 00:27:57.489 | 99.00th=[ 6063], 99.50th=[16712], 99.90th=[17433], 99.95th=[17433], 00:27:57.489 | 99.99th=[17433] 00:27:57.489 bw ( KiB/s): min=11904, max=15872, per=25.05%, avg=14705.78, stdev=1291.86, samples=9 00:27:57.489 iops : min= 1488, max= 1984, avg=1838.22, stdev=161.48, samples=9 00:27:57.489 lat (msec) : 2=0.61%, 4=1.89%, 10=96.81%, 20=0.70% 00:27:57.489 cpu : usr=94.22%, sys=4.52%, ctx=12, majf=0, minf=9 00:27:57.489 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:57.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:57.489 issued rwts: total=9208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:57.489 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:57.489 00:27:57.489 Run status group 0 (all jobs): 00:27:57.489 READ: bw=57.3MiB/s (60.1MB/s), 14.3MiB/s-14.4MiB/s (15.0MB/s-15.1MB/s), io=287MiB (301MB), run=5001-5003msec 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
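A quick consistency check on the four per-thread results above: with the 8 KiB read size from the job banner, bandwidth is simply IOPS times block size, and the group READ figure is the sum over the four threads.

    # 1832 IOPS * 8 KiB = 14656 KiB/s, i.e. about 14.3 MiB/s per thread;
    # three threads at ~14.3 plus one at 14.4 gives the 57.3 MiB/s group total
    awk 'BEGIN { printf "%.1f MiB/s\n", 1832 * 8 / 1024 }'
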
00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.489 ************************************ 00:27:57.489 END TEST fio_dif_rand_params 00:27:57.489 ************************************ 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.489 00:27:57.489 real 0m23.507s 00:27:57.489 user 2m5.815s 00:27:57.489 sys 0m5.296s 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:57.489 17:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:57.489 17:27:39 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:57.489 17:27:39 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:57.489 17:27:39 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.489 17:27:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:57.489 ************************************ 00:27:57.489 START TEST fio_dif_digest 00:27:57.489 ************************************ 00:27:57.489 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:27:57.489 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:57.489 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:57.489 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:57.489 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:57.489 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:57.490 17:27:39 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.490 bdev_null0 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:57.490 [2024-12-06 17:27:39.549656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:57.490 { 00:27:57.490 "params": { 00:27:57.490 "name": "Nvme$subsystem", 00:27:57.490 "trtype": "$TEST_TRANSPORT", 00:27:57.490 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:57.490 "adrfam": "ipv4", 00:27:57.490 "trsvcid": "$NVMF_PORT", 00:27:57.490 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:57.490 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:57.490 "hdgst": ${hdgst:-false}, 00:27:57.490 "ddgst": ${ddgst:-false} 00:27:57.490 }, 00:27:57.490 "method": "bdev_nvme_attach_controller" 00:27:57.490 } 00:27:57.490 EOF 00:27:57.490 )") 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:57.490 "params": { 00:27:57.490 "name": "Nvme0", 00:27:57.490 "trtype": "tcp", 00:27:57.490 "traddr": "10.0.0.3", 00:27:57.490 "adrfam": "ipv4", 00:27:57.490 "trsvcid": "4420", 00:27:57.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:57.490 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:57.490 "hdgst": true, 00:27:57.490 "ddgst": true 00:27:57.490 }, 00:27:57.490 "method": "bdev_nvme_attach_controller" 00:27:57.490 }' 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
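The heredoc assembled above is the JSON handed to fio's spdk_bdev engine via /dev/fd/62. With the values substituted, the payload printed by the trace corresponds to roughly this file; only the inner params object appears verbatim above, so the outer subsystems/config envelope is our assumption about what --spdk_json_conf expects:

    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }]
      }]
    }

With that saved to a file, the fio run that follows is equivalent to the sketch below (the jobfile is generated by gen_fio_conf and not reproduced here):

    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf config.json jobfile.fio
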
00:27:57.490 17:27:39 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:57.490 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:57.490 ... 00:27:57.490 fio-3.35 00:27:57.490 Starting 3 threads 00:28:09.719 00:28:09.719 filename0: (groupid=0, jobs=1): err= 0: pid=108575: Fri Dec 6 17:27:50 2024 00:28:09.719 read: IOPS=234, BW=29.3MiB/s (30.8MB/s)(294MiB/10009msec) 00:28:09.719 slat (nsec): min=5203, max=61224, avg=15045.66, stdev=4236.57 00:28:09.719 clat (usec): min=9588, max=54143, avg=12762.65, stdev=2437.94 00:28:09.719 lat (usec): min=9607, max=54156, avg=12777.70, stdev=2438.02 00:28:09.719 clat percentiles (usec): 00:28:09.719 | 1.00th=[10290], 5.00th=[10814], 10.00th=[11207], 20.00th=[11731], 00:28:09.719 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[12911], 00:28:09.719 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14353], 00:28:09.719 | 99.00th=[18220], 99.50th=[21627], 99.90th=[53216], 99.95th=[53740], 00:28:09.719 | 99.99th=[54264] 00:28:09.719 bw ( KiB/s): min=26880, max=30976, per=39.09%, avg=30035.45, stdev=1163.13, samples=20 00:28:09.719 iops : min= 210, max= 242, avg=234.65, stdev= 9.09, samples=20 00:28:09.719 lat (msec) : 10=0.43%, 20=98.94%, 50=0.38%, 100=0.26% 00:28:09.719 cpu : usr=92.73%, sys=5.69%, ctx=23, majf=0, minf=0 00:28:09.719 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.719 issued rwts: total=2349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.719 filename0: (groupid=0, jobs=1): err= 0: pid=108576: Fri Dec 6 17:27:50 2024 00:28:09.719 read: IOPS=195, BW=24.5MiB/s (25.6MB/s)(245MiB/10007msec) 00:28:09.719 slat (nsec): min=8085, max=59226, avg=14289.79, stdev=4094.73 00:28:09.719 clat (usec): min=7118, max=31320, avg=15311.51, stdev=1637.34 00:28:09.719 lat (usec): min=7131, max=31334, avg=15325.80, stdev=1637.47 00:28:09.719 clat percentiles (usec): 00:28:09.719 | 1.00th=[ 9765], 5.00th=[12911], 10.00th=[13566], 20.00th=[14222], 00:28:09.719 | 30.00th=[14615], 40.00th=[15008], 50.00th=[15401], 60.00th=[15664], 00:28:09.719 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:28:09.719 | 99.00th=[19006], 99.50th=[20579], 99.90th=[29230], 99.95th=[31327], 00:28:09.719 | 99.99th=[31327] 00:28:09.719 bw ( KiB/s): min=23296, max=26368, per=32.56%, avg=25020.63, stdev=728.30, samples=19 00:28:09.719 iops : min= 182, max= 206, avg=195.47, stdev= 5.69, samples=19 00:28:09.719 lat (msec) : 10=1.12%, 20=98.31%, 50=0.56% 00:28:09.719 cpu : usr=92.72%, sys=5.87%, ctx=37, majf=0, minf=0 00:28:09.719 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.719 issued rwts: total=1958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.719 filename0: (groupid=0, jobs=1): err= 0: pid=108577: Fri Dec 6 17:27:50 2024 00:28:09.719 read: IOPS=169, BW=21.2MiB/s (22.3MB/s)(213MiB/10007msec) 00:28:09.719 slat (nsec): min=8182, 
max=43819, avg=12211.08, stdev=5338.07 00:28:09.719 clat (usec): min=9111, max=34472, avg=17617.10, stdev=1559.25 00:28:09.719 lat (usec): min=9120, max=34483, avg=17629.31, stdev=1559.44 00:28:09.719 clat percentiles (usec): 00:28:09.719 | 1.00th=[11731], 5.00th=[16319], 10.00th=[16581], 20.00th=[16909], 00:28:09.719 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17695], 00:28:09.719 | 70.00th=[17957], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:28:09.719 | 99.00th=[23987], 99.50th=[26870], 99.90th=[30540], 99.95th=[34341], 00:28:09.719 | 99.99th=[34341] 00:28:09.719 bw ( KiB/s): min=19200, max=22272, per=28.30%, avg=21746.53, stdev=726.46, samples=19 00:28:09.719 iops : min= 150, max= 174, avg=169.89, stdev= 5.68, samples=19 00:28:09.719 lat (msec) : 10=0.18%, 20=98.41%, 50=1.41% 00:28:09.719 cpu : usr=92.84%, sys=5.71%, ctx=23, majf=0, minf=0 00:28:09.719 IO depths : 1=33.2%, 2=66.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:09.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:09.719 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:09.719 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:09.719 00:28:09.719 Run status group 0 (all jobs): 00:28:09.719 READ: bw=75.0MiB/s (78.7MB/s), 21.2MiB/s-29.3MiB/s (22.3MB/s-30.8MB/s), io=751MiB (787MB), run=10007-10009msec 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.719 ************************************ 00:28:09.719 END TEST fio_dif_digest 00:28:09.719 ************************************ 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.719 00:28:09.719 real 0m10.954s 00:28:09.719 user 0m28.459s 00:28:09.719 sys 0m1.966s 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.719 17:27:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:09.719 17:27:50 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:09.719 17:27:50 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:09.719 17:27:50 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.719 17:27:50 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:09.719 17:27:50 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.719 17:27:50 nvmf_dif -- nvmf/common.sh@124 -- 
# set +e 00:28:09.719 17:27:50 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.720 rmmod nvme_tcp 00:28:09.720 rmmod nvme_fabrics 00:28:09.720 rmmod nvme_keyring 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 107845 ']' 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 107845 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 107845 ']' 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 107845 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107845 00:28:09.720 killing process with pid 107845 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107845' 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@973 -- # kill 107845 00:28:09.720 17:27:50 nvmf_dif -- common/autotest_common.sh@978 -- # wait 107845 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:09.720 17:27:50 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:09.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:09.720 Waiting for block devices as requested 00:28:09.720 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:09.720 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.720 17:27:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:09.720 17:27:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.720 17:27:51 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:09.720 00:28:09.720 real 0m59.218s 00:28:09.720 user 3m51.520s 00:28:09.720 sys 0m14.145s 00:28:09.720 17:27:51 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.720 17:27:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:09.720 ************************************ 00:28:09.720 END TEST nvmf_dif 00:28:09.720 ************************************ 00:28:09.720 17:27:51 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:09.720 17:27:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:09.720 17:27:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.720 17:27:51 -- common/autotest_common.sh@10 -- # set +x 00:28:09.720 ************************************ 00:28:09.720 START TEST nvmf_abort_qd_sizes 00:28:09.720 ************************************ 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:09.720 * Looking for test storage... 
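The nvmf_veth_fini sequence just traced condenses to one runnable block; commands are verbatim from the log, only the loop is our shorthand:

    # Detach the four host-side bridge peers and bring them down...
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    # ...then delete the bridge and both ends of each veth pair
    # (the target-side interfaces live in the test namespace).
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
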
00:28:09.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.720 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:09.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.720 --rc genhtml_branch_coverage=1 00:28:09.720 --rc genhtml_function_coverage=1 00:28:09.720 --rc genhtml_legend=1 00:28:09.721 --rc geninfo_all_blocks=1 00:28:09.721 --rc geninfo_unexecuted_blocks=1 00:28:09.721 00:28:09.721 ' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:09.721 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.721 --rc genhtml_branch_coverage=1 00:28:09.721 --rc genhtml_function_coverage=1 00:28:09.721 --rc genhtml_legend=1 00:28:09.721 --rc geninfo_all_blocks=1 00:28:09.721 --rc geninfo_unexecuted_blocks=1 00:28:09.721 00:28:09.721 ' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.721 --rc genhtml_branch_coverage=1 00:28:09.721 --rc genhtml_function_coverage=1 00:28:09.721 --rc genhtml_legend=1 00:28:09.721 --rc geninfo_all_blocks=1 00:28:09.721 --rc geninfo_unexecuted_blocks=1 00:28:09.721 00:28:09.721 ' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:09.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.721 --rc genhtml_branch_coverage=1 00:28:09.721 --rc genhtml_function_coverage=1 00:28:09.721 --rc genhtml_legend=1 00:28:09.721 --rc geninfo_all_blocks=1 00:28:09.721 --rc geninfo_unexecuted_blocks=1 00:28:09.721 00:28:09.721 ' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:09.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:09.721 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:09.722 Cannot find device "nvmf_init_br" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:09.722 Cannot find device "nvmf_init_br2" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:09.722 Cannot find device "nvmf_tgt_br" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:09.722 Cannot find device "nvmf_tgt_br2" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:09.722 Cannot find device "nvmf_init_br" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:28:09.722 Cannot find device "nvmf_init_br2" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:09.722 Cannot find device "nvmf_tgt_br" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:09.722 Cannot find device "nvmf_tgt_br2" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:09.722 Cannot find device "nvmf_br" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:09.722 Cannot find device "nvmf_init_if" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:09.722 Cannot find device "nvmf_init_if2" 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:09.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:09.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:09.722 17:27:51 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:09.722 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:09.722 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
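The "Cannot find device" and "Cannot open network namespace" messages above are expected on this run: nvmf_veth_init starts by deleting any leftover topology, and the previous test's nvmftestfini already removed it (each failing command is followed by "# true" in the trace for exactly that reason). The rebuild just traced amounts to the following, commands verbatim from the log:

    # Two veth pairs for the initiator side, two for the target side; the
    # target ends move into the nvmf_tgt_ns_spdk namespace and carry the
    # 10.0.0.3/.4 addresses the tests dial.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
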
00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:09.981 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:09.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:09.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:28:09.982 00:28:09.982 --- 10.0.0.3 ping statistics --- 00:28:09.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.982 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:09.982 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:09.982 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:28:09.982 00:28:09.982 --- 10.0.0.4 ping statistics --- 00:28:09.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.982 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:09.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
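The bridging and firewall step just traced, condensed (verbatim commands; the loop is shorthand):

    # Join the four host-side peers into one bridge so initiator and target
    # interfaces share a segment, then open TCP/4420. The SPDK_NVMF comment
    # tag is what lets nvmftestfini strip these rules later with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four pings that follow verify both directions: host to namespaced target (10.0.0.3/.4) and namespace back to the initiator addresses (10.0.0.1/.2).
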
00:28:09.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:28:09.982 00:28:09.982 --- 10.0.0.1 ping statistics --- 00:28:09.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.982 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:09.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.093 ms 00:28:09.982 00:28:09.982 --- 10.0.0.2 ping statistics --- 00:28:09.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.982 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:28:09.982 17:27:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:10.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:10.918 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:10.918 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=109223 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 109223 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 109223 ']' 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:10.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:10.918 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:10.918 [2024-12-06 17:27:53.183205] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
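nvmfappstart, traced below, boils down to launching the target inside the test namespace and blocking until its RPC socket answers. A sketch; the launch line is verbatim from the trace, while the polling loop is our portable stand-in for the autotest waitforlisten helper:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # waitforlisten polls /var/tmp/spdk.sock; any cheap RPC works as a probe.
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
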
00:28:10.918 [2024-12-06 17:27:53.183344] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.177 [2024-12-06 17:27:53.336634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:11.177 [2024-12-06 17:27:53.382963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:11.177 [2024-12-06 17:27:53.383283] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:11.177 [2024-12-06 17:27:53.383530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:11.177 [2024-12-06 17:27:53.383822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:11.177 [2024-12-06 17:27:53.384023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:11.177 [2024-12-06 17:27:53.385320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.177 [2024-12-06 17:27:53.385384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:11.177 [2024-12-06 17:27:53.385474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.177 [2024-12-06 17:27:53.385465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:11.436 17:27:53 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:11.436 ************************************ 00:28:11.436 START TEST spdk_target_abort 00:28:11.436 ************************************ 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.436 spdk_targetn1 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.436 [2024-12-06 17:27:53.692822] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.436 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:11.695 [2024-12-06 17:27:53.733561] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.695 17:27:53 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:11.695 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:11.696 17:27:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:14.980 Initializing NVMe Controllers 00:28:14.980 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:14.980 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:14.980 Initialization complete. Launching workers. 
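[Editor's note] The trace above shows rabort() (target/abort_qd_sizes.sh) assembling the transport ID string one field at a time and then launching the abort example once per queue depth. Condensed into a minimal sketch, with all paths and values copied from this run:

  # Sketch of the loop traced above (values taken from this run's xtrace).
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # -q: abort queue depth; -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done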
00:28:14.980 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11322, failed: 0 00:28:14.980 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1069, failed to submit 10253 00:28:14.980 success 766, unsuccessful 303, failed 0 00:28:14.980 17:27:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:14.980 17:27:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:18.262 Initializing NVMe Controllers 00:28:18.262 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:18.262 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:18.262 Initialization complete. Launching workers. 00:28:18.262 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5983, failed: 0 00:28:18.262 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 4743 00:28:18.262 success 281, unsuccessful 959, failed 0 00:28:18.262 17:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:18.262 17:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:21.544 Initializing NVMe Controllers 00:28:21.544 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:21.544 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:21.544 Initialization complete. Launching workers. 
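[Editor's note] The abort counters printed by each run are self-consistent: aborts submitted plus aborts that failed to submit equals the I/Os completed (1069 + 10253 = 11322 at qd=4; 1240 + 4743 = 5983 at qd=24), and successful plus unsuccessful aborts equals the aborts submitted (766 + 303 = 1069; 281 + 959 = 1240). Varying -q across 4, 24 and 64 is exactly what this test exercises.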
00:28:21.544 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30413, failed: 0 00:28:21.544 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2759, failed to submit 27654 00:28:21.544 success 366, unsuccessful 2393, failed 0 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.544 17:28:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109223 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 109223 ']' 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 109223 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109223 00:28:22.472 killing process with pid 109223 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109223' 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 109223 00:28:22.472 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 109223 00:28:22.729 00:28:22.729 real 0m11.254s 00:28:22.729 user 0m42.578s 00:28:22.729 sys 0m1.692s 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:22.729 ************************************ 00:28:22.729 END TEST spdk_target_abort 00:28:22.729 ************************************ 00:28:22.729 17:28:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:22.729 17:28:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:22.729 17:28:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.729 17:28:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:22.729 ************************************ 00:28:22.729 START TEST kernel_target_abort 00:28:22.729 
************************************ 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:22.729 17:28:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:22.984 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:23.240 Waiting for block devices as requested 00:28:23.240 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:23.240 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:23.240 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:23.496 No valid GPT data, bailing 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:23.496 No valid GPT data, bailing 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:23.496 No valid GPT data, bailing 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:23.496 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:23.497 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:23.497 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:23.497 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.497 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:23.497 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:23.497 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:23.497 No valid GPT data, bailing 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 --hostid=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 -a 10.0.0.1 -t tcp -s 4420 00:28:23.758 00:28:23.758 Discovery Log Number of Records 2, Generation counter 2 00:28:23.758 =====Discovery Log Entry 0====== 00:28:23.758 trtype: tcp 00:28:23.758 adrfam: ipv4 00:28:23.758 subtype: current discovery subsystem 00:28:23.758 treq: not specified, sq flow control disable supported 00:28:23.758 portid: 1 00:28:23.758 trsvcid: 4420 00:28:23.758 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:23.758 traddr: 10.0.0.1 00:28:23.758 eflags: none 00:28:23.758 sectype: none 00:28:23.758 =====Discovery Log Entry 1====== 00:28:23.758 trtype: tcp 00:28:23.758 adrfam: ipv4 00:28:23.758 subtype: nvme subsystem 00:28:23.758 treq: not specified, sq flow control disable supported 00:28:23.758 portid: 1 00:28:23.758 trsvcid: 4420 00:28:23.758 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:23.758 traddr: 10.0.0.1 00:28:23.758 eflags: none 00:28:23.758 sectype: none 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:23.758 17:28:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.758 17:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.046 Initializing NVMe Controllers 00:28:27.046 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:27.046 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:27.046 Initialization complete. Launching workers. 00:28:27.046 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34090, failed: 0 00:28:27.046 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34090, failed to submit 0 00:28:27.046 success 0, unsuccessful 34090, failed 0 00:28:27.046 17:28:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.046 17:28:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:30.331 Initializing NVMe Controllers 00:28:30.331 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:30.331 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:30.331 Initialization complete. Launching workers. 
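[Editor's note] configure_kernel_target (nvmf/common.sh@686-705, traced above) builds the in-kernel nvmet target entirely through configfs. xtrace does not record where each echo is redirected, so the attribute file names in this sketch are filled in from the standard Linux nvmet configfs layout; they are an assumption, not part of the trace, while the mkdir/echo/ln values are copied from it:

  # Assumed nvmet configfs attribute names; directory paths and values are from the trace.
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"   # assumed target file
  echo 1 > "$sub/attr_allow_any_host"
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"   # expose the subsystem on the port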
00:28:30.331 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67644, failed: 0 00:28:30.331 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29526, failed to submit 38118 00:28:30.332 success 0, unsuccessful 29526, failed 0 00:28:30.332 17:28:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:30.332 17:28:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:33.628 Initializing NVMe Controllers 00:28:33.628 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:33.628 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:33.628 Initialization complete. Launching workers. 00:28:33.628 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76531, failed: 0 00:28:33.628 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19086, failed to submit 57445 00:28:33.628 success 0, unsuccessful 19086, failed 0 00:28:33.628 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:33.628 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:33.628 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:33.628 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:33.629 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:33.629 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:33.629 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:33.629 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:33.629 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:33.629 17:28:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:33.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:35.787 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:35.787 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:35.787 00:28:35.787 real 0m13.079s 00:28:35.787 user 0m6.432s 00:28:35.787 sys 0m4.042s 00:28:35.787 17:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.787 17:28:17 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.787 ************************************ 00:28:35.787 END TEST kernel_target_abort 00:28:35.787 ************************************ 00:28:35.787 17:28:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:35.787 17:28:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:35.787 
17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:35.787 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:36.046 rmmod nvme_tcp 00:28:36.046 rmmod nvme_fabrics 00:28:36.046 rmmod nvme_keyring 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 109223 ']' 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 109223 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 109223 ']' 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 109223 00:28:36.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109223) - No such process 00:28:36.046 Process with pid 109223 is not found 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 109223 is not found' 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:36.046 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:36.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:36.304 Waiting for block devices as requested 00:28:36.304 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.563 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:36.563 17:28:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:36.563 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:36.823 00:28:36.823 real 0m27.359s 00:28:36.823 user 0m50.210s 00:28:36.823 sys 0m7.103s 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.823 ************************************ 00:28:36.823 END TEST nvmf_abort_qd_sizes 00:28:36.823 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:36.823 ************************************ 00:28:36.823 17:28:19 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:36.823 17:28:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:36.823 17:28:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.823 17:28:19 -- common/autotest_common.sh@10 -- # set +x 00:28:36.823 ************************************ 00:28:36.823 START TEST keyring_file 00:28:36.823 ************************************ 00:28:36.823 17:28:19 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:36.823 * Looking for test storage... 
00:28:36.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:36.823 17:28:19 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.823 17:28:19 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.823 17:28:19 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:37.083 17:28:19 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:37.083 17:28:19 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:37.084 17:28:19 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:37.084 17:28:19 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:37.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.084 --rc genhtml_branch_coverage=1 00:28:37.084 --rc genhtml_function_coverage=1 00:28:37.084 --rc genhtml_legend=1 00:28:37.084 --rc geninfo_all_blocks=1 00:28:37.084 --rc geninfo_unexecuted_blocks=1 00:28:37.084 00:28:37.084 ' 00:28:37.084 17:28:19 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:37.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.084 --rc genhtml_branch_coverage=1 00:28:37.084 --rc genhtml_function_coverage=1 00:28:37.084 --rc genhtml_legend=1 00:28:37.084 --rc geninfo_all_blocks=1 00:28:37.084 --rc 
geninfo_unexecuted_blocks=1 00:28:37.084 00:28:37.084 ' 00:28:37.084 17:28:19 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:37.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.084 --rc genhtml_branch_coverage=1 00:28:37.084 --rc genhtml_function_coverage=1 00:28:37.084 --rc genhtml_legend=1 00:28:37.084 --rc geninfo_all_blocks=1 00:28:37.084 --rc geninfo_unexecuted_blocks=1 00:28:37.084 00:28:37.084 ' 00:28:37.084 17:28:19 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:37.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:37.084 --rc genhtml_branch_coverage=1 00:28:37.084 --rc genhtml_function_coverage=1 00:28:37.084 --rc genhtml_legend=1 00:28:37.084 --rc geninfo_all_blocks=1 00:28:37.084 --rc geninfo_unexecuted_blocks=1 00:28:37.084 00:28:37.084 ' 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:37.084 17:28:19 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:37.084 17:28:19 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.084 17:28:19 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.084 17:28:19 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.084 17:28:19 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:37.084 17:28:19 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:37.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:37.084 17:28:19 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Lmks8YIQUH 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Lmks8YIQUH 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Lmks8YIQUH 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Lmks8YIQUH 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mw1GF1imPu 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:37.084 17:28:19 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mw1GF1imPu 00:28:37.084 17:28:19 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mw1GF1imPu 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mw1GF1imPu 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@30 -- # tgtpid=110136 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@32 -- # waitforlisten 110136 00:28:37.084 17:28:19 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:37.085 17:28:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110136 ']' 00:28:37.085 17:28:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.085 17:28:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.085 17:28:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:28:37.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.085 17:28:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.085 17:28:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:37.344 [2024-12-06 17:28:19.387125] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:28:37.344 [2024-12-06 17:28:19.387216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110136 ] 00:28:37.344 [2024-12-06 17:28:19.535562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.344 [2024-12-06 17:28:19.574561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:37.603 17:28:19 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:37.603 [2024-12-06 17:28:19.771153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.603 null0 00:28:37.603 [2024-12-06 17:28:19.803106] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:37.603 [2024-12-06 17:28:19.803336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.603 17:28:19 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:37.603 [2024-12-06 17:28:19.835128] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:37.603 2024/12/06 17:28:19 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:28:37.603 request: 00:28:37.603 { 00:28:37.603 "method": "nvmf_subsystem_add_listener", 00:28:37.603 "params": { 00:28:37.603 "nqn": 
"nqn.2016-06.io.spdk:cnode0", 00:28:37.603 "secure_channel": false, 00:28:37.603 "listen_address": { 00:28:37.603 "trtype": "tcp", 00:28:37.603 "traddr": "127.0.0.1", 00:28:37.603 "trsvcid": "4420" 00:28:37.603 } 00:28:37.603 } 00:28:37.603 } 00:28:37.603 Got JSON-RPC error response 00:28:37.603 GoRPCClient: error on JSON-RPC call 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:37.603 17:28:19 keyring_file -- keyring/file.sh@47 -- # bperfpid=110154 00:28:37.603 17:28:19 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:37.603 17:28:19 keyring_file -- keyring/file.sh@49 -- # waitforlisten 110154 /var/tmp/bperf.sock 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110154 ']' 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.603 17:28:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:37.862 [2024-12-06 17:28:19.905761] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
00:28:37.862 [2024-12-06 17:28:19.905864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110154 ] 00:28:37.862 [2024-12-06 17:28:20.058827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.862 [2024-12-06 17:28:20.101527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.795 17:28:20 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.795 17:28:20 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:38.795 17:28:20 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:38.795 17:28:20 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:39.053 17:28:21 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mw1GF1imPu 00:28:39.053 17:28:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mw1GF1imPu 00:28:39.312 17:28:21 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:39.312 17:28:21 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:39.312 17:28:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:39.312 17:28:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:39.312 17:28:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:39.880 17:28:21 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Lmks8YIQUH == \/\t\m\p\/\t\m\p\.\L\m\k\s\8\Y\I\Q\U\H ]] 00:28:39.880 17:28:21 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:39.880 17:28:21 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:39.880 17:28:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:39.880 17:28:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:39.880 17:28:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:40.137 17:28:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mw1GF1imPu == \/\t\m\p\/\t\m\p\.\m\w\1\G\F\1\i\m\P\u ]] 00:28:40.137 17:28:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:40.137 17:28:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:40.137 17:28:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:40.137 17:28:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.137 17:28:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.137 17:28:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:40.394 17:28:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:40.394 17:28:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:40.394 17:28:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:40.394 17:28:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:40.394 17:28:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.394 17:28:22 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.394 17:28:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:40.651 17:28:22 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:40.651 17:28:22 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:40.651 17:28:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:40.909 [2024-12-06 17:28:23.107680] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:40.909 nvme0n1 00:28:40.909 17:28:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:40.909 17:28:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:40.909 17:28:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:40.909 17:28:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.909 17:28:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.909 17:28:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:41.476 17:28:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:41.476 17:28:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:41.476 17:28:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:41.476 17:28:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:41.476 17:28:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:41.476 17:28:23 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:41.476 17:28:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:41.746 17:28:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:41.746 17:28:23 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:41.746 Running I/O for 1 seconds... 
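
The block above is the keyring_file happy path: register two PSK files, verify both paths via keyring_get_keys, then attach a TLS controller that pins key0. As bare commands (the /tmp file names are this run's mktemp output; the rpc wrapper mirrors what bperf_cmd does in keyring/common.sh):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    rpc keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH
    rpc keyring_file_add_key key1 /tmp/tmp.mw1GF1imPu
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
        --psk key0
    # the controller takes a reference on the key, hence refcnt 1 -> 2 above:
    rpc keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
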
00:28:42.736 11079.00 IOPS, 43.28 MiB/s 00:28:42.736 Latency(us) 00:28:42.736 [2024-12-06T17:28:25.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.736 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:42.736 nvme0n1 : 1.01 11133.15 43.49 0.00 0.00 11466.59 3991.74 16920.20 00:28:42.736 [2024-12-06T17:28:25.034Z] =================================================================================================================== 00:28:42.736 [2024-12-06T17:28:25.034Z] Total : 11133.15 43.49 0.00 0.00 11466.59 3991.74 16920.20 00:28:42.736 { 00:28:42.736 "results": [ 00:28:42.736 { 00:28:42.736 "job": "nvme0n1", 00:28:42.736 "core_mask": "0x2", 00:28:42.736 "workload": "randrw", 00:28:42.736 "percentage": 50, 00:28:42.736 "status": "finished", 00:28:42.736 "queue_depth": 128, 00:28:42.736 "io_size": 4096, 00:28:42.736 "runtime": 1.006633, 00:28:42.736 "iops": 11133.153790904928, 00:28:42.736 "mibps": 43.48888199572237, 00:28:42.736 "io_failed": 0, 00:28:42.736 "io_timeout": 0, 00:28:42.736 "avg_latency_us": 11466.593794462877, 00:28:42.736 "min_latency_us": 3991.7381818181816, 00:28:42.736 "max_latency_us": 16920.203636363636 00:28:42.736 } 00:28:42.736 ], 00:28:42.736 "core_count": 1 00:28:42.736 } 00:28:42.736 17:28:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:42.736 17:28:24 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:43.007 17:28:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:43.007 17:28:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:43.007 17:28:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:43.007 17:28:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:43.007 17:28:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:43.007 17:28:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.573 17:28:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:43.573 17:28:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:43.573 17:28:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:43.573 17:28:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:43.573 17:28:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:43.573 17:28:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:43.573 17:28:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.830 17:28:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:43.830 17:28:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:43.830 17:28:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:43.830 17:28:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:43.830 17:28:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:43.830 17:28:25 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.830 17:28:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:43.830 17:28:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:43.830 17:28:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:43.830 17:28:25 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:44.088 [2024-12-06 17:28:26.258651] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:44.088 [2024-12-06 17:28:26.259251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a05010 (107): Transport endpoint is not connected 00:28:44.088 [2024-12-06 17:28:26.260236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a05010 (9): Bad file descriptor 00:28:44.088 [2024-12-06 17:28:26.261231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:44.088 [2024-12-06 17:28:26.261255] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:44.088 [2024-12-06 17:28:26.261279] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:44.088 [2024-12-06 17:28:26.261294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
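
This failure is the expected outcome: the initiator presents key1, which the target side was not provisioned to accept, so the connection never reaches a working state and the attach RPC returns an error (the Code=-5 Input/output error response follows below). The NOT wrapper from common/autotest_common.sh asserts exactly that by inverting the exit status; a crude stand-in (hypothetical helper name; the real NOT() also classifies the exit code) would be:

    expect_fail() { ! "$@"; }    # succeed only if the wrapped command fails
    expect_fail rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1   # rpc() as in the sketch above
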
00:28:44.088 2024/12/06 17:28:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:44.088 request: 00:28:44.088 { 00:28:44.088 "method": "bdev_nvme_attach_controller", 00:28:44.088 "params": { 00:28:44.088 "name": "nvme0", 00:28:44.088 "trtype": "tcp", 00:28:44.088 "traddr": "127.0.0.1", 00:28:44.088 "adrfam": "ipv4", 00:28:44.088 "trsvcid": "4420", 00:28:44.088 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:44.088 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:44.088 "prchk_reftag": false, 00:28:44.088 "prchk_guard": false, 00:28:44.088 "hdgst": false, 00:28:44.088 "ddgst": false, 00:28:44.088 "psk": "key1", 00:28:44.088 "allow_unrecognized_csi": false 00:28:44.088 } 00:28:44.088 } 00:28:44.088 Got JSON-RPC error response 00:28:44.088 GoRPCClient: error on JSON-RPC call 00:28:44.088 17:28:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:44.088 17:28:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:44.088 17:28:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:44.088 17:28:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:44.088 17:28:26 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:44.088 17:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:44.088 17:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:44.088 17:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:44.088 17:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:44.088 17:28:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.347 17:28:26 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:44.347 17:28:26 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:44.347 17:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:44.347 17:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:44.347 17:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:44.347 17:28:26 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.347 17:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:44.913 17:28:27 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:44.913 17:28:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:44.913 17:28:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:45.173 17:28:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:45.173 17:28:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:45.431 17:28:27 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:45.431 17:28:27 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:45.431 17:28:27 keyring_file -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:45.997 17:28:28 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:45.997 17:28:28 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Lmks8YIQUH 00:28:45.997 17:28:28 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:45.997 17:28:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:45.997 17:28:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:45.997 17:28:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:45.997 17:28:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:45.998 17:28:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:45.998 [2024-12-06 17:28:28.267392] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Lmks8YIQUH': 0100660 00:28:45.998 [2024-12-06 17:28:28.267441] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:45.998 2024/12/06 17:28:28 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.Lmks8YIQUH], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:28:45.998 request: 00:28:45.998 { 00:28:45.998 "method": "keyring_file_add_key", 00:28:45.998 "params": { 00:28:45.998 "name": "key0", 00:28:45.998 "path": "/tmp/tmp.Lmks8YIQUH" 00:28:45.998 } 00:28:45.998 } 00:28:45.998 Got JSON-RPC error response 00:28:45.998 GoRPCClient: error on JSON-RPC call 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:45.998 17:28:28 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:45.998 17:28:28 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Lmks8YIQUH 00:28:45.998 17:28:28 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:45.998 17:28:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Lmks8YIQUH 00:28:46.565 17:28:28 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Lmks8YIQUH 00:28:46.565 17:28:28 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:46.565 17:28:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:46.565 17:28:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:46.565 17:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:46.565 17:28:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:46.565 17:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:46.824 17:28:28 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:46.824 17:28:28 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:46.824 17:28:28 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:46.824 17:28:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:47.082 [2024-12-06 17:28:29.247710] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Lmks8YIQUH': No such file or directory 00:28:47.082 [2024-12-06 17:28:29.247759] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:47.082 [2024-12-06 17:28:29.247781] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:47.082 [2024-12-06 17:28:29.247791] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:47.082 [2024-12-06 17:28:29.247801] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:47.082 [2024-12-06 17:28:29.247809] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:47.082 2024/12/06 17:28:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:28:47.082 request: 00:28:47.082 { 00:28:47.082 "method": "bdev_nvme_attach_controller", 00:28:47.082 "params": { 00:28:47.082 "name": "nvme0", 00:28:47.082 "trtype": "tcp", 00:28:47.082 "traddr": "127.0.0.1", 00:28:47.082 "adrfam": "ipv4", 00:28:47.082 "trsvcid": "4420", 00:28:47.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:47.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:47.082 "prchk_reftag": false, 00:28:47.082 "prchk_guard": false, 00:28:47.082 "hdgst": false, 00:28:47.083 "ddgst": false, 00:28:47.083 "psk": "key0", 00:28:47.083 "allow_unrecognized_csi": false 00:28:47.083 } 00:28:47.083 } 00:28:47.083 Got JSON-RPC error response 00:28:47.083 
GoRPCClient: error on JSON-RPC call 00:28:47.083 17:28:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:47.083 17:28:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:47.083 17:28:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:47.083 17:28:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:47.083 17:28:29 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:47.083 17:28:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:47.341 17:28:29 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SR1MVypobR 00:28:47.341 17:28:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:47.341 17:28:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:47.341 17:28:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:47.341 17:28:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:47.341 17:28:29 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:47.341 17:28:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:47.341 17:28:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:47.599 17:28:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SR1MVypobR 00:28:47.599 17:28:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SR1MVypobR 00:28:47.599 17:28:29 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.SR1MVypobR 00:28:47.599 17:28:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SR1MVypobR 00:28:47.599 17:28:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SR1MVypobR 00:28:47.857 17:28:29 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:47.857 17:28:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:48.115 nvme0n1 00:28:48.115 17:28:30 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:48.115 17:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:48.116 17:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:48.116 17:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.116 17:28:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.116 17:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
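
prep_key above mints a fresh interchange-format PSK, registers it as key0, and re-attaches the controller. A sketch of what prep_key plus format_interchange_psk appear to compute for digest 0 (no hash); the CRC flavor and byte order here are assumptions inferred from the key material in this log, not checked against the SPDK source:

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)
    python3 - "$key" <<'EOF' > "$path"
    import base64, sys, zlib
    key = sys.argv[1].encode()                   # the configured PSK bytes
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity tag (assumed)
    print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:", end="")
    EOF
    chmod 0600 "$path"   # group/other bits would be rejected, per the 0660 test above
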
00:28:48.374 17:28:30 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:48.374 17:28:30 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:48.374 17:28:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:48.941 17:28:30 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:48.941 17:28:30 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:48.941 17:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:48.941 17:28:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:48.941 17:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:49.199 17:28:31 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:49.199 17:28:31 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:49.199 17:28:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:49.199 17:28:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:49.199 17:28:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:49.199 17:28:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.199 17:28:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:49.459 17:28:31 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:49.459 17:28:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:49.459 17:28:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:49.717 17:28:31 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:49.717 17:28:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:49.717 17:28:31 keyring_file -- keyring/file.sh@105 -- # jq length 00:28:49.976 17:28:32 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:28:49.976 17:28:32 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SR1MVypobR 00:28:49.976 17:28:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SR1MVypobR 00:28:50.543 17:28:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mw1GF1imPu 00:28:50.543 17:28:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mw1GF1imPu 00:28:50.802 17:28:33 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:50.802 17:28:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:51.060 nvme0n1 00:28:51.318 17:28:33 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:28:51.318 17:28:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
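
The removal checks above pin down the lifetime rules: deleting key0 while nvme0 still holds it leaves the entry visible with refcnt 1 and removed=true, and it only disappears once the controller detaches. Condensed (rpc() as in the earlier sketch):

    rpc keyring_file_remove_key key0
    rpc keyring_get_keys | jq '.[] | select(.name == "key0") | {refcnt, removed}'
    # -> { "refcnt": 1, "removed": true }   still pinned by nvme0
    rpc bdev_nvme_detach_controller nvme0
    rpc keyring_get_keys | jq length        # -> 0 once nothing references it

Both keys are then re-added, nvme0 re-attached, and save_config snapshots the resulting state; its JSON dump follows.
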
00:28:51.578 17:28:33 keyring_file -- keyring/file.sh@113 -- # config='{ 00:28:51.578 "subsystems": [ 00:28:51.578 { 00:28:51.578 "subsystem": "keyring", 00:28:51.578 "config": [ 00:28:51.578 { 00:28:51.578 "method": "keyring_file_add_key", 00:28:51.578 "params": { 00:28:51.578 "name": "key0", 00:28:51.578 "path": "/tmp/tmp.SR1MVypobR" 00:28:51.578 } 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "method": "keyring_file_add_key", 00:28:51.578 "params": { 00:28:51.578 "name": "key1", 00:28:51.578 "path": "/tmp/tmp.mw1GF1imPu" 00:28:51.578 } 00:28:51.578 } 00:28:51.578 ] 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "subsystem": "iobuf", 00:28:51.578 "config": [ 00:28:51.578 { 00:28:51.578 "method": "iobuf_set_options", 00:28:51.578 "params": { 00:28:51.578 "enable_numa": false, 00:28:51.578 "large_bufsize": 135168, 00:28:51.578 "large_pool_count": 1024, 00:28:51.578 "small_bufsize": 8192, 00:28:51.578 "small_pool_count": 8192 00:28:51.578 } 00:28:51.578 } 00:28:51.578 ] 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "subsystem": "sock", 00:28:51.578 "config": [ 00:28:51.578 { 00:28:51.578 "method": "sock_set_default_impl", 00:28:51.578 "params": { 00:28:51.578 "impl_name": "posix" 00:28:51.578 } 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "method": "sock_impl_set_options", 00:28:51.578 "params": { 00:28:51.578 "enable_ktls": false, 00:28:51.578 "enable_placement_id": 0, 00:28:51.578 "enable_quickack": false, 00:28:51.578 "enable_recv_pipe": true, 00:28:51.578 "enable_zerocopy_send_client": false, 00:28:51.578 "enable_zerocopy_send_server": true, 00:28:51.578 "impl_name": "ssl", 00:28:51.578 "recv_buf_size": 4096, 00:28:51.578 "send_buf_size": 4096, 00:28:51.578 "tls_version": 0, 00:28:51.578 "zerocopy_threshold": 0 00:28:51.578 } 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "method": "sock_impl_set_options", 00:28:51.578 "params": { 00:28:51.578 "enable_ktls": false, 00:28:51.578 "enable_placement_id": 0, 00:28:51.578 "enable_quickack": false, 00:28:51.578 "enable_recv_pipe": true, 00:28:51.578 "enable_zerocopy_send_client": false, 00:28:51.578 "enable_zerocopy_send_server": true, 00:28:51.578 "impl_name": "posix", 00:28:51.578 "recv_buf_size": 2097152, 00:28:51.578 "send_buf_size": 2097152, 00:28:51.578 "tls_version": 0, 00:28:51.578 "zerocopy_threshold": 0 00:28:51.578 } 00:28:51.578 } 00:28:51.578 ] 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "subsystem": "vmd", 00:28:51.578 "config": [] 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "subsystem": "accel", 00:28:51.578 "config": [ 00:28:51.578 { 00:28:51.578 "method": "accel_set_options", 00:28:51.578 "params": { 00:28:51.578 "buf_count": 2048, 00:28:51.578 "large_cache_size": 16, 00:28:51.578 "sequence_count": 2048, 00:28:51.578 "small_cache_size": 128, 00:28:51.578 "task_count": 2048 00:28:51.578 } 00:28:51.578 } 00:28:51.578 ] 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "subsystem": "bdev", 00:28:51.578 "config": [ 00:28:51.578 { 00:28:51.578 "method": "bdev_set_options", 00:28:51.578 "params": { 00:28:51.578 "bdev_auto_examine": true, 00:28:51.578 "bdev_io_cache_size": 256, 00:28:51.578 "bdev_io_pool_size": 65535, 00:28:51.578 "iobuf_large_cache_size": 16, 00:28:51.578 "iobuf_small_cache_size": 128 00:28:51.578 } 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "method": "bdev_raid_set_options", 00:28:51.578 "params": { 00:28:51.578 "process_max_bandwidth_mb_sec": 0, 00:28:51.578 "process_window_size_kb": 1024 00:28:51.578 } 00:28:51.578 }, 00:28:51.578 { 00:28:51.578 "method": "bdev_iscsi_set_options", 00:28:51.578 "params": { 00:28:51.578 
"timeout_sec": 30 00:28:51.578 } 00:28:51.579 }, 00:28:51.579 { 00:28:51.579 "method": "bdev_nvme_set_options", 00:28:51.579 "params": { 00:28:51.579 "action_on_timeout": "none", 00:28:51.579 "allow_accel_sequence": false, 00:28:51.579 "arbitration_burst": 0, 00:28:51.579 "bdev_retry_count": 3, 00:28:51.579 "ctrlr_loss_timeout_sec": 0, 00:28:51.579 "delay_cmd_submit": true, 00:28:51.579 "dhchap_dhgroups": [ 00:28:51.579 "null", 00:28:51.579 "ffdhe2048", 00:28:51.579 "ffdhe3072", 00:28:51.579 "ffdhe4096", 00:28:51.579 "ffdhe6144", 00:28:51.579 "ffdhe8192" 00:28:51.579 ], 00:28:51.579 "dhchap_digests": [ 00:28:51.579 "sha256", 00:28:51.579 "sha384", 00:28:51.579 "sha512" 00:28:51.579 ], 00:28:51.579 "disable_auto_failback": false, 00:28:51.579 "fast_io_fail_timeout_sec": 0, 00:28:51.579 "generate_uuids": false, 00:28:51.579 "high_priority_weight": 0, 00:28:51.579 "io_path_stat": false, 00:28:51.579 "io_queue_requests": 512, 00:28:51.579 "keep_alive_timeout_ms": 10000, 00:28:51.579 "low_priority_weight": 0, 00:28:51.579 "medium_priority_weight": 0, 00:28:51.579 "nvme_adminq_poll_period_us": 10000, 00:28:51.579 "nvme_error_stat": false, 00:28:51.579 "nvme_ioq_poll_period_us": 0, 00:28:51.579 "rdma_cm_event_timeout_ms": 0, 00:28:51.579 "rdma_max_cq_size": 0, 00:28:51.579 "rdma_srq_size": 0, 00:28:51.579 "reconnect_delay_sec": 0, 00:28:51.579 "timeout_admin_us": 0, 00:28:51.579 "timeout_us": 0, 00:28:51.579 "transport_ack_timeout": 0, 00:28:51.579 "transport_retry_count": 4, 00:28:51.579 "transport_tos": 0 00:28:51.579 } 00:28:51.579 }, 00:28:51.579 { 00:28:51.579 "method": "bdev_nvme_attach_controller", 00:28:51.579 "params": { 00:28:51.579 "adrfam": "IPv4", 00:28:51.579 "ctrlr_loss_timeout_sec": 0, 00:28:51.579 "ddgst": false, 00:28:51.579 "fast_io_fail_timeout_sec": 0, 00:28:51.579 "hdgst": false, 00:28:51.579 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:51.579 "multipath": "multipath", 00:28:51.579 "name": "nvme0", 00:28:51.579 "prchk_guard": false, 00:28:51.579 "prchk_reftag": false, 00:28:51.579 "psk": "key0", 00:28:51.579 "reconnect_delay_sec": 0, 00:28:51.579 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.579 "traddr": "127.0.0.1", 00:28:51.579 "trsvcid": "4420", 00:28:51.579 "trtype": "TCP" 00:28:51.579 } 00:28:51.579 }, 00:28:51.579 { 00:28:51.579 "method": "bdev_nvme_set_hotplug", 00:28:51.579 "params": { 00:28:51.579 "enable": false, 00:28:51.579 "period_us": 100000 00:28:51.579 } 00:28:51.579 }, 00:28:51.579 { 00:28:51.579 "method": "bdev_wait_for_examine" 00:28:51.579 } 00:28:51.579 ] 00:28:51.579 }, 00:28:51.579 { 00:28:51.579 "subsystem": "nbd", 00:28:51.579 "config": [] 00:28:51.579 } 00:28:51.579 ] 00:28:51.579 }' 00:28:51.579 17:28:33 keyring_file -- keyring/file.sh@115 -- # killprocess 110154 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110154 ']' 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110154 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110154 00:28:51.579 killing process with pid 110154 00:28:51.579 Received shutdown signal, test time was about 1.000000 seconds 00:28:51.579 00:28:51.579 Latency(us) 00:28:51.579 [2024-12-06T17:28:33.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:51.579 [2024-12-06T17:28:33.877Z] 
=================================================================================================================== 00:28:51.579 [2024-12-06T17:28:33.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110154' 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@973 -- # kill 110154 00:28:51.579 17:28:33 keyring_file -- common/autotest_common.sh@978 -- # wait 110154 00:28:51.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:51.845 17:28:33 keyring_file -- keyring/file.sh@118 -- # bperfpid=110646 00:28:51.845 17:28:33 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110646 /var/tmp/bperf.sock 00:28:51.845 17:28:33 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110646 ']' 00:28:51.845 17:28:33 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:51.845 17:28:33 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.845 17:28:33 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:51.845 17:28:33 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:51.845 17:28:33 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.845 17:28:33 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:28:51.845 "subsystems": [ 00:28:51.845 { 00:28:51.845 "subsystem": "keyring", 00:28:51.845 "config": [ 00:28:51.845 { 00:28:51.845 "method": "keyring_file_add_key", 00:28:51.845 "params": { 00:28:51.845 "name": "key0", 00:28:51.845 "path": "/tmp/tmp.SR1MVypobR" 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "keyring_file_add_key", 00:28:51.845 "params": { 00:28:51.845 "name": "key1", 00:28:51.845 "path": "/tmp/tmp.mw1GF1imPu" 00:28:51.845 } 00:28:51.845 } 00:28:51.845 ] 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "subsystem": "iobuf", 00:28:51.845 "config": [ 00:28:51.845 { 00:28:51.845 "method": "iobuf_set_options", 00:28:51.845 "params": { 00:28:51.845 "enable_numa": false, 00:28:51.845 "large_bufsize": 135168, 00:28:51.845 "large_pool_count": 1024, 00:28:51.845 "small_bufsize": 8192, 00:28:51.845 "small_pool_count": 8192 00:28:51.845 } 00:28:51.845 } 00:28:51.845 ] 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "subsystem": "sock", 00:28:51.845 "config": [ 00:28:51.845 { 00:28:51.845 "method": "sock_set_default_impl", 00:28:51.845 "params": { 00:28:51.845 "impl_name": "posix" 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "sock_impl_set_options", 00:28:51.845 "params": { 00:28:51.845 "enable_ktls": false, 00:28:51.845 "enable_placement_id": 0, 00:28:51.845 "enable_quickack": false, 00:28:51.845 "enable_recv_pipe": true, 00:28:51.845 "enable_zerocopy_send_client": false, 00:28:51.845 "enable_zerocopy_send_server": true, 00:28:51.845 "impl_name": "ssl", 00:28:51.845 "recv_buf_size": 4096, 00:28:51.845 "send_buf_size": 4096, 00:28:51.845 "tls_version": 0, 00:28:51.845 "zerocopy_threshold": 0 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "sock_impl_set_options", 00:28:51.845 "params": 
{ 00:28:51.845 "enable_ktls": false, 00:28:51.845 "enable_placement_id": 0, 00:28:51.845 "enable_quickack": false, 00:28:51.845 "enable_recv_pipe": true, 00:28:51.845 "enable_zerocopy_send_client": false, 00:28:51.845 "enable_zerocopy_send_server": true, 00:28:51.845 "impl_name": "posix", 00:28:51.845 "recv_buf_size": 2097152, 00:28:51.845 "send_buf_size": 2097152, 00:28:51.845 "tls_version": 0, 00:28:51.845 "zerocopy_threshold": 0 00:28:51.845 } 00:28:51.845 } 00:28:51.845 ] 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "subsystem": "vmd", 00:28:51.845 "config": [] 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "subsystem": "accel", 00:28:51.845 "config": [ 00:28:51.845 { 00:28:51.845 "method": "accel_set_options", 00:28:51.845 "params": { 00:28:51.845 "buf_count": 2048, 00:28:51.845 "large_cache_size": 16, 00:28:51.845 "sequence_count": 2048, 00:28:51.845 "small_cache_size": 128, 00:28:51.845 "task_count": 2048 00:28:51.845 } 00:28:51.845 } 00:28:51.845 ] 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "subsystem": "bdev", 00:28:51.845 "config": [ 00:28:51.845 { 00:28:51.845 "method": "bdev_set_options", 00:28:51.845 "params": { 00:28:51.845 "bdev_auto_examine": true, 00:28:51.845 "bdev_io_cache_size": 256, 00:28:51.845 "bdev_io_pool_size": 65535, 00:28:51.845 "iobuf_large_cache_size": 16, 00:28:51.845 "iobuf_small_cache_size": 128 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "bdev_raid_set_options", 00:28:51.845 "params": { 00:28:51.845 "process_max_bandwidth_mb_sec": 0, 00:28:51.845 "process_window_size_kb": 1024 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "bdev_iscsi_set_options", 00:28:51.845 "params": { 00:28:51.845 "timeout_sec": 30 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "bdev_nvme_set_options", 00:28:51.845 "params": { 00:28:51.845 "action_on_timeout": "none", 00:28:51.845 "allow_accel_sequence": false, 00:28:51.845 "arbitration_burst": 0, 00:28:51.845 "bdev_retry_count": 3, 00:28:51.845 "ctrlr_loss_timeout_sec": 0, 00:28:51.845 "delay_cmd_submit": true, 00:28:51.845 "dhchap_dhgroups": [ 00:28:51.845 "null", 00:28:51.845 "ffdhe2048", 00:28:51.845 "ffdhe3072", 00:28:51.845 "ffdhe4096", 00:28:51.845 "ffdhe6144", 00:28:51.845 "ffdhe8192" 00:28:51.845 ], 00:28:51.845 "dhchap_digests": [ 00:28:51.845 "sha256", 00:28:51.845 "sha384", 00:28:51.845 "sha512" 00:28:51.845 ], 00:28:51.845 "disable_auto_failback": false, 00:28:51.845 "fast_io_fail_timeout_sec": 0, 00:28:51.845 "generate_uuids": false, 00:28:51.845 "high_priority_weight": 0, 00:28:51.845 "io_path_stat": false, 00:28:51.845 "io_queue_requests": 512, 00:28:51.845 "keep_alive_timeout_ms": 10000, 00:28:51.845 "low_priority_weight": 0, 00:28:51.845 "medium_priority_weight": 0, 00:28:51.845 "nvme_adminq_poll_period_us": 10000, 00:28:51.845 "nvme_error_stat": false, 00:28:51.845 "nvme_ioq_poll_period_us": 0, 00:28:51.845 "rdma_cm_event_timeout_ms": 0, 00:28:51.845 "rdma_max_cq_size": 0, 00:28:51.845 "rdma_srq_size": 0, 00:28:51.845 "reconnect_delay_sec": 0, 00:28:51.845 "timeout_admin_us": 0, 00:28:51.845 "timeout_us": 0, 00:28:51.845 "transport_ack_timeout": 0, 00:28:51.845 "transport_retry_count": 4, 00:28:51.845 "transport_tos": 0 00:28:51.845 } 00:28:51.845 }, 00:28:51.845 { 00:28:51.845 "method": "bdev_nvme_attach_controller", 00:28:51.845 "params": { 00:28:51.845 "adrfam": "IPv4", 00:28:51.845 "ctrlr_loss_timeout_sec": 0, 00:28:51.845 "ddgst": false, 00:28:51.845 "fast_io_fail_timeout_sec": 0, 00:28:51.845 "hdgst": false, 00:28:51.845 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:28:51.845 "multipath": "multipath", 00:28:51.845 "name": "nvme0", 00:28:51.845 "prchk_guard": false, 00:28:51.845 "prchk_reftag": false, 00:28:51.845 "psk": "key0", 00:28:51.845 "reconnect_delay_sec": 0, 00:28:51.845 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:51.845 "traddr": "127.0.0.1", 00:28:51.845 "trsvcid": "4420", 00:28:51.845 "trtype": "TCP" 00:28:51.845 } 00:28:51.845 }, 00:28:51.846 { 00:28:51.846 "method": "bdev_nvme_set_hotplug", 00:28:51.846 "params": { 00:28:51.846 "enable": false, 00:28:51.846 "period_us": 100000 00:28:51.846 } 00:28:51.846 }, 00:28:51.846 { 00:28:51.846 "method": "bdev_wait_for_examine" 00:28:51.846 } 00:28:51.846 ] 00:28:51.846 }, 00:28:51.846 { 00:28:51.846 "subsystem": "nbd", 00:28:51.846 "config": [] 00:28:51.846 } 00:28:51.846 ] 00:28:51.846 }' 00:28:51.846 17:28:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:51.846 [2024-12-06 17:28:34.012221] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 00:28:51.846 [2024-12-06 17:28:34.012350] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110646 ] 00:28:52.104 [2024-12-06 17:28:34.164855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.104 [2024-12-06 17:28:34.204192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:52.105 [2024-12-06 17:28:34.354975] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:53.038 17:28:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.038 17:28:35 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:53.038 17:28:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:28:53.038 17:28:35 keyring_file -- keyring/file.sh@121 -- # jq length 00:28:53.038 17:28:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.295 17:28:35 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:53.295 17:28:35 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:28:53.295 17:28:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:53.295 17:28:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.295 17:28:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.295 17:28:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.295 17:28:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.861 17:28:35 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:28:53.861 17:28:35 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:28:53.861 17:28:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:53.861 17:28:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.861 17:28:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:53.861 17:28:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.861 17:28:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:54.120 17:28:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:28:54.120 
17:28:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:28:54.120 17:28:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:54.120 17:28:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:28:54.379 17:28:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:28:54.379 17:28:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:54.379 17:28:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SR1MVypobR /tmp/tmp.mw1GF1imPu 00:28:54.379 17:28:36 keyring_file -- keyring/file.sh@20 -- # killprocess 110646 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110646 ']' 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110646 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110646 00:28:54.379 killing process with pid 110646 00:28:54.379 Received shutdown signal, test time was about 1.000000 seconds 00:28:54.379 00:28:54.379 Latency(us) 00:28:54.379 [2024-12-06T17:28:36.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.379 [2024-12-06T17:28:36.677Z] =================================================================================================================== 00:28:54.379 [2024-12-06T17:28:36.677Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110646' 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@973 -- # kill 110646 00:28:54.379 17:28:36 keyring_file -- common/autotest_common.sh@978 -- # wait 110646 00:28:54.637 17:28:36 keyring_file -- keyring/file.sh@21 -- # killprocess 110136 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110136 ']' 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110136 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110136 00:28:54.637 killing process with pid 110136 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110136' 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@973 -- # kill 110136 00:28:54.637 17:28:36 keyring_file -- common/autotest_common.sh@978 -- # wait 110136 00:28:54.895 00:28:54.895 real 0m18.013s 00:28:54.895 user 0m47.268s 00:28:54.895 sys 0m3.159s 00:28:54.895 17:28:37 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.895 ************************************ 00:28:54.895 END TEST keyring_file 00:28:54.895 ************************************ 00:28:54.895 17:28:37 keyring_file -- 
common/autotest_common.sh@10 -- # set +x 00:28:54.895 17:28:37 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:28:54.895 17:28:37 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:54.895 17:28:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:54.895 17:28:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.895 17:28:37 -- common/autotest_common.sh@10 -- # set +x 00:28:54.895 ************************************ 00:28:54.895 START TEST keyring_linux 00:28:54.895 ************************************ 00:28:54.895 17:28:37 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:54.895 Joined session keyring: 872001673 00:28:54.895 * Looking for test storage... 00:28:54.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:54.895 17:28:37 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:54.895 17:28:37 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:28:54.895 17:28:37 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:55.155 17:28:37 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:55.155 17:28:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:28:55.155 17:28:37 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:55.155 17:28:37 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:55.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.155 --rc genhtml_branch_coverage=1 00:28:55.155 --rc genhtml_function_coverage=1 00:28:55.155 --rc genhtml_legend=1 00:28:55.155 --rc geninfo_all_blocks=1 00:28:55.155 --rc geninfo_unexecuted_blocks=1 00:28:55.155 00:28:55.155 ' 00:28:55.155 17:28:37 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:55.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.155 --rc genhtml_branch_coverage=1 00:28:55.155 --rc genhtml_function_coverage=1 00:28:55.155 --rc genhtml_legend=1 00:28:55.155 --rc geninfo_all_blocks=1 00:28:55.155 --rc geninfo_unexecuted_blocks=1 00:28:55.155 00:28:55.155 ' 00:28:55.155 17:28:37 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:55.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.156 --rc genhtml_branch_coverage=1 00:28:55.156 --rc genhtml_function_coverage=1 00:28:55.156 --rc genhtml_legend=1 00:28:55.156 --rc geninfo_all_blocks=1 00:28:55.156 --rc geninfo_unexecuted_blocks=1 00:28:55.156 00:28:55.156 ' 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:55.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:55.156 --rc genhtml_branch_coverage=1 00:28:55.156 --rc genhtml_function_coverage=1 00:28:55.156 --rc genhtml_legend=1 00:28:55.156 --rc geninfo_all_blocks=1 00:28:55.156 --rc geninfo_unexecuted_blocks=1 00:28:55.156 00:28:55.156 ' 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:55.156 17:28:37 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:55.156 17:28:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:28:55.156 17:28:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:55.156 17:28:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:55.156 17:28:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:55.156 17:28:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.156 17:28:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.156 17:28:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.156 17:28:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:55.156 17:28:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@51 -- # : 0 
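
A side note on the environment setup above: the host identity is generated per run rather than fixed. nvme-cli's gen-hostnqn emits a uuid-based NQN, and the host ID is its uuid suffix; the parameter expansion below is a guess at how nvmf/common.sh derives it, though the values match this run:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # e.g. nqn.2014-08.org.nvmexpress:uuid:f9d2cf8c-971b-45c3-8a07-ea00e9eb6f56
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # strip everything through the last ':'
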
00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:55.156 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:55.156 /tmp/:spdk-test:key0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:28:55.156 17:28:37 keyring_linux -- nvmf/common.sh@733 -- # python - 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:55.156 /tmp/:spdk-test:key1 00:28:55.156 17:28:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110814 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.156 17:28:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110814 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110814 ']' 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.156 17:28:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:55.415 [2024-12-06 17:28:37.454606] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
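Two notes on the key preparation above. The '[: : integer expression expected' complaint from nvmf/common.sh line 33 is bash objecting to an empty string in an integer test ('[' '' -eq 1 ']'); the guard variable is simply unset, the branch is skipped, and the run continues. More interesting is format_interchange_psk, which turns each raw hex string into the NVMeTLSkey-1:00:<base64>: interchange form that appears in the trace. A sketch of that derivation, assuming digest 0 means the ASCII key bytes are used as-is with a CRC-32 appended before base64 encoding:

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"        # key0's material from the trace
blob = key + struct.pack("<I", zlib.crc32(key))  # key bytes plus little-endian CRC-32
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(blob).decode())
EOF

The chmod 0600 that follows keeps the generated keyfile private before its path is echoed back to the caller.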
00:28:55.415 [2024-12-06 17:28:37.454711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110814 ] 00:28:55.415 [2024-12-06 17:28:37.614031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.415 [2024-12-06 17:28:37.654067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:55.675 17:28:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:55.675 [2024-12-06 17:28:37.850212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:55.675 null0 00:28:55.675 [2024-12-06 17:28:37.882182] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:55.675 [2024-12-06 17:28:37.882410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.675 17:28:37 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:55.675 767351386 00:28:55.675 17:28:37 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:55.675 699017172 00:28:55.675 17:28:37 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110832 00:28:55.675 17:28:37 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:55.675 17:28:37 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110832 /var/tmp/bperf.sock 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110832 ']' 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:55.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:55.675 17:28:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:55.935 [2024-12-06 17:28:37.983437] Starting SPDK v25.01-pre git sha1 c7acbd6be / DPDK 24.03.0 initialization... 
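The two keyctl calls above are the core of the Linux-keyring path: each PSK is loaded as a 'user' key into the session keyring (@s), and the serial numbers printed (767351386 and 699017172) are what every later lookup resolves. The same round trip by hand, with <base64> and <serial> as placeholders for the interchange key and whatever serial keyctl returns:

keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64>:" @s  # prints the new key's serial
keyctl search @s user :spdk-test:key0                           # resolves the serial by name
keyctl print <serial>                                           # dumps the payload for comparison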
00:28:55.935 [2024-12-06 17:28:37.983578] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110832 ] 00:28:55.935 [2024-12-06 17:28:38.134808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.935 [2024-12-06 17:28:38.169986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.194 17:28:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.194 17:28:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:28:56.194 17:28:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:56.194 17:28:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:56.453 17:28:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:56.453 17:28:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:56.711 17:28:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:56.711 17:28:38 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:56.970 [2024-12-06 17:28:39.195231] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:57.228 nvme0n1 00:28:57.228 17:28:39 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:57.228 17:28:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:57.228 17:28:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:57.228 17:28:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:57.228 17:28:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.228 17:28:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:57.487 17:28:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:57.487 17:28:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:57.487 17:28:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:57.487 17:28:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:57.487 17:28:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:57.487 17:28:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:57.487 17:28:39 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.746 17:28:39 keyring_linux -- keyring/linux.sh@25 -- # sn=767351386 00:28:57.746 17:28:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:57.746 17:28:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:57.746 17:28:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 767351386 == \7\6\7\3\5\1\3\8\6 ]] 00:28:57.746 17:28:39 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 767351386 00:28:57.746 17:28:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:57.746 17:28:39 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.005 Running I/O for 1 seconds... 00:28:58.943 10666.00 IOPS, 41.66 MiB/s 00:28:58.943 Latency(us) 00:28:58.943 [2024-12-06T17:28:41.241Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.943 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:58.943 nvme0n1 : 1.01 10668.38 41.67 0.00 0.00 11921.98 8221.79 18588.39 00:28:58.943 [2024-12-06T17:28:41.241Z] =================================================================================================================== 00:28:58.943 [2024-12-06T17:28:41.241Z] Total : 10668.38 41.67 0.00 0.00 11921.98 8221.79 18588.39 00:28:58.943 { 00:28:58.943 "results": [ 00:28:58.943 { 00:28:58.943 "job": "nvme0n1", 00:28:58.943 "core_mask": "0x2", 00:28:58.943 "workload": "randread", 00:28:58.943 "status": "finished", 00:28:58.943 "queue_depth": 128, 00:28:58.943 "io_size": 4096, 00:28:58.943 "runtime": 1.011775, 00:28:58.943 "iops": 10668.379827530825, 00:28:58.943 "mibps": 41.67335870129229, 00:28:58.943 "io_failed": 0, 00:28:58.943 "io_timeout": 0, 00:28:58.943 "avg_latency_us": 11921.983863088923, 00:28:58.943 "min_latency_us": 8221.789090909091, 00:28:58.943 "max_latency_us": 18588.392727272727 00:28:58.943 } 00:28:58.943 ], 00:28:58.943 "core_count": 1 00:28:58.943 } 00:28:58.943 17:28:41 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:58.943 17:28:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:59.201 17:28:41 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:59.201 17:28:41 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:59.202 17:28:41 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:59.202 17:28:41 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:59.202 17:28:41 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:59.202 17:28:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.770 17:28:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:59.770 17:28:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:59.770 17:28:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:59.770 17:28:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:59.770 17:28:41 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:28:59.770 17:28:41 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:59.770 17:28:41 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:59.770 17:28:41 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.770 17:28:41 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:59.770 17:28:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.770 17:28:41 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:59.770 17:28:41 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:00.029 [2024-12-06 17:28:42.166362] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:00.029 [2024-12-06 17:28:42.166630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14731f0 (107): Transport endpoint is not connected 00:29:00.029 [2024-12-06 17:28:42.167619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14731f0 (9): Bad file descriptor 00:29:00.029 [2024-12-06 17:28:42.168615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:00.029 [2024-12-06 17:28:42.168641] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:00.029 [2024-12-06 17:28:42.168654] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:00.029 [2024-12-06 17:28:42.168665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
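The attach that just failed goes through the same RPC as the successful one at linux.sh@75, differing only in the key. Spelled out as the underlying rpc.py invocation, with every argument copied from the trace:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key1
# key1 is not the PSK the target side was prepared with, so the connection is torn
# down and the RPC returns Code=-5 (Input/output error), as the log shows just below.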
00:29:00.029 2024/12/06 17:28:42 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:29:00.029 request: 00:29:00.029 { 00:29:00.029 "method": "bdev_nvme_attach_controller", 00:29:00.029 "params": { 00:29:00.029 "name": "nvme0", 00:29:00.029 "trtype": "tcp", 00:29:00.029 "traddr": "127.0.0.1", 00:29:00.029 "adrfam": "ipv4", 00:29:00.029 "trsvcid": "4420", 00:29:00.029 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.029 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:00.029 "prchk_reftag": false, 00:29:00.029 "prchk_guard": false, 00:29:00.029 "hdgst": false, 00:29:00.029 "ddgst": false, 00:29:00.029 "psk": ":spdk-test:key1", 00:29:00.029 "allow_unrecognized_csi": false 00:29:00.029 } 00:29:00.029 } 00:29:00.029 Got JSON-RPC error response 00:29:00.029 GoRPCClient: error on JSON-RPC call 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@33 -- # sn=767351386 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 767351386 00:29:00.029 1 links removed 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@33 -- # sn=699017172 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 699017172 00:29:00.029 1 links removed 00:29:00.029 17:28:42 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110832 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110832 ']' 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110832 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110832 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:00.029 
17:28:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:00.029 killing process with pid 110832 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110832' 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 110832 00:29:00.029 Received shutdown signal, test time was about 1.000000 seconds 00:29:00.029 00:29:00.029 Latency(us) 00:29:00.029 [2024-12-06T17:28:42.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.029 [2024-12-06T17:28:42.327Z] =================================================================================================================== 00:29:00.029 [2024-12-06T17:28:42.327Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.029 17:28:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 110832 00:29:00.287 17:28:42 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110814 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110814 ']' 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110814 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110814 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110814' 00:29:00.287 killing process with pid 110814 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@973 -- # kill 110814 00:29:00.287 17:28:42 keyring_linux -- common/autotest_common.sh@978 -- # wait 110814 00:29:00.546 00:29:00.546 real 0m5.591s 00:29:00.546 user 0m11.866s 00:29:00.546 sys 0m1.396s 00:29:00.546 17:28:42 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.546 17:28:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:00.546 ************************************ 00:29:00.546 END TEST keyring_linux 00:29:00.546 ************************************ 00:29:00.546 17:28:42 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:00.546 17:28:42 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:00.546 17:28:42 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:00.546 17:28:42 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:00.546 17:28:42 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:00.546 17:28:42 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:00.546 17:28:42 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:00.546 17:28:42 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.546 17:28:42 -- common/autotest_common.sh@10 -- # set +x 00:29:00.546 17:28:42 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:00.546 17:28:42 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:00.546 17:28:42 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:00.546 17:28:42 -- common/autotest_common.sh@10 -- # set +x 00:29:02.480 INFO: APP EXITING 00:29:02.480 INFO: killing all VMs 00:29:02.480 INFO: killing vhost app 00:29:02.480 INFO: EXIT DONE 00:29:03.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:03.044 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:03.044 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:03.610 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:03.610 Cleaning 00:29:03.610 Removing: /var/run/dpdk/spdk0/config 00:29:03.610 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:03.610 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:03.610 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:03.610 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:03.610 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:03.610 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:03.610 Removing: /var/run/dpdk/spdk1/config 00:29:03.610 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:03.610 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:03.610 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:03.610 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:03.610 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:03.868 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:03.868 Removing: /var/run/dpdk/spdk2/config 00:29:03.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:03.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:03.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:03.868 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:03.868 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:03.868 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:03.868 Removing: /var/run/dpdk/spdk3/config 00:29:03.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:03.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:03.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:03.868 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:03.868 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:03.868 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:03.868 Removing: /var/run/dpdk/spdk4/config 00:29:03.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:03.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:03.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:03.868 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:03.868 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:03.868 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:03.868 Removing: /dev/shm/nvmf_trace.0 00:29:03.868 Removing: /dev/shm/spdk_tgt_trace.pid58612 00:29:03.868 Removing: /var/run/dpdk/spdk0 00:29:03.868 Removing: /var/run/dpdk/spdk1 00:29:03.868 Removing: /var/run/dpdk/spdk2 00:29:03.868 Removing: /var/run/dpdk/spdk3 00:29:03.868 Removing: /var/run/dpdk/spdk4 00:29:03.868 Removing: /var/run/dpdk/spdk_pid100705 00:29:03.868 Removing: 
/var/run/dpdk/spdk_pid100751 00:29:03.868 Removing: /var/run/dpdk/spdk_pid101094 00:29:03.868 Removing: /var/run/dpdk/spdk_pid101140 00:29:03.868 Removing: /var/run/dpdk/spdk_pid101545 00:29:03.868 Removing: /var/run/dpdk/spdk_pid102102 00:29:03.868 Removing: /var/run/dpdk/spdk_pid102538 00:29:03.868 Removing: /var/run/dpdk/spdk_pid103525 00:29:03.868 Removing: /var/run/dpdk/spdk_pid104548 00:29:03.868 Removing: /var/run/dpdk/spdk_pid104661 00:29:03.868 Removing: /var/run/dpdk/spdk_pid104718 00:29:03.868 Removing: /var/run/dpdk/spdk_pid106307 00:29:03.868 Removing: /var/run/dpdk/spdk_pid106615 00:29:03.868 Removing: /var/run/dpdk/spdk_pid106945 00:29:03.868 Removing: /var/run/dpdk/spdk_pid107503 00:29:03.868 Removing: /var/run/dpdk/spdk_pid107508 00:29:03.868 Removing: /var/run/dpdk/spdk_pid107901 00:29:03.868 Removing: /var/run/dpdk/spdk_pid108060 00:29:03.868 Removing: /var/run/dpdk/spdk_pid108215 00:29:03.868 Removing: /var/run/dpdk/spdk_pid108307 00:29:03.868 Removing: /var/run/dpdk/spdk_pid108462 00:29:03.868 Removing: /var/run/dpdk/spdk_pid108566 00:29:03.868 Removing: /var/run/dpdk/spdk_pid109284 00:29:03.868 Removing: /var/run/dpdk/spdk_pid109314 00:29:03.868 Removing: /var/run/dpdk/spdk_pid109348 00:29:03.868 Removing: /var/run/dpdk/spdk_pid109599 00:29:03.868 Removing: /var/run/dpdk/spdk_pid109633 00:29:03.868 Removing: /var/run/dpdk/spdk_pid109666 00:29:03.868 Removing: /var/run/dpdk/spdk_pid110136 00:29:03.868 Removing: /var/run/dpdk/spdk_pid110154 00:29:03.868 Removing: /var/run/dpdk/spdk_pid110646 00:29:03.868 Removing: /var/run/dpdk/spdk_pid110814 00:29:03.868 Removing: /var/run/dpdk/spdk_pid110832 00:29:03.868 Removing: /var/run/dpdk/spdk_pid58459 00:29:03.868 Removing: /var/run/dpdk/spdk_pid58612 00:29:03.868 Removing: /var/run/dpdk/spdk_pid58862 00:29:03.868 Removing: /var/run/dpdk/spdk_pid58955 00:29:03.868 Removing: /var/run/dpdk/spdk_pid58981 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59085 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59115 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59254 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59540 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59719 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59809 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59909 00:29:03.868 Removing: /var/run/dpdk/spdk_pid59993 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60026 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60061 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60131 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60243 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60886 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60931 00:29:03.868 Removing: /var/run/dpdk/spdk_pid60981 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61001 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61075 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61089 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61164 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61177 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61233 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61245 00:29:03.868 Removing: /var/run/dpdk/spdk_pid61297 00:29:04.126 Removing: /var/run/dpdk/spdk_pid61313 00:29:04.126 Removing: /var/run/dpdk/spdk_pid61454 00:29:04.126 Removing: /var/run/dpdk/spdk_pid61484 00:29:04.126 Removing: /var/run/dpdk/spdk_pid61567 00:29:04.126 Removing: /var/run/dpdk/spdk_pid62035 00:29:04.126 Removing: /var/run/dpdk/spdk_pid62398 00:29:04.126 Removing: /var/run/dpdk/spdk_pid64913 00:29:04.126 Removing: /var/run/dpdk/spdk_pid64964 00:29:04.126 Removing: /var/run/dpdk/spdk_pid65306 00:29:04.126 Removing: 
/var/run/dpdk/spdk_pid65347 00:29:04.126 Removing: /var/run/dpdk/spdk_pid65757 00:29:04.126 Removing: /var/run/dpdk/spdk_pid66340 00:29:04.126 Removing: /var/run/dpdk/spdk_pid66783 00:29:04.126 Removing: /var/run/dpdk/spdk_pid67789 00:29:04.126 Removing: /var/run/dpdk/spdk_pid68848 00:29:04.126 Removing: /var/run/dpdk/spdk_pid68961 00:29:04.126 Removing: /var/run/dpdk/spdk_pid69033 00:29:04.126 Removing: /var/run/dpdk/spdk_pid70637 00:29:04.126 Removing: /var/run/dpdk/spdk_pid70987 00:29:04.126 Removing: /var/run/dpdk/spdk_pid74811 00:29:04.126 Removing: /var/run/dpdk/spdk_pid75213 00:29:04.126 Removing: /var/run/dpdk/spdk_pid75836 00:29:04.126 Removing: /var/run/dpdk/spdk_pid76381 00:29:04.126 Removing: /var/run/dpdk/spdk_pid82301 00:29:04.126 Removing: /var/run/dpdk/spdk_pid82797 00:29:04.126 Removing: /var/run/dpdk/spdk_pid82906 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83065 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83104 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83143 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83182 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83337 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83479 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83740 00:29:04.126 Removing: /var/run/dpdk/spdk_pid83856 00:29:04.126 Removing: /var/run/dpdk/spdk_pid84107 00:29:04.126 Removing: /var/run/dpdk/spdk_pid84205 00:29:04.126 Removing: /var/run/dpdk/spdk_pid84327 00:29:04.126 Removing: /var/run/dpdk/spdk_pid84691 00:29:04.126 Removing: /var/run/dpdk/spdk_pid85119 00:29:04.126 Removing: /var/run/dpdk/spdk_pid85120 00:29:04.126 Removing: /var/run/dpdk/spdk_pid85121 00:29:04.126 Removing: /var/run/dpdk/spdk_pid85413 00:29:04.126 Removing: /var/run/dpdk/spdk_pid85678 00:29:04.126 Removing: /var/run/dpdk/spdk_pid86089 00:29:04.126 Removing: /var/run/dpdk/spdk_pid86403 00:29:04.126 Removing: /var/run/dpdk/spdk_pid86975 00:29:04.126 Removing: /var/run/dpdk/spdk_pid86982 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87364 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87385 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87399 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87424 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87435 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87834 00:29:04.126 Removing: /var/run/dpdk/spdk_pid87883 00:29:04.126 Removing: /var/run/dpdk/spdk_pid88271 00:29:04.126 Removing: /var/run/dpdk/spdk_pid88507 00:29:04.126 Removing: /var/run/dpdk/spdk_pid89033 00:29:04.126 Removing: /var/run/dpdk/spdk_pid89638 00:29:04.126 Removing: /var/run/dpdk/spdk_pid91064 00:29:04.126 Removing: /var/run/dpdk/spdk_pid91704 00:29:04.126 Removing: /var/run/dpdk/spdk_pid91706 00:29:04.126 Removing: /var/run/dpdk/spdk_pid93766 00:29:04.126 Removing: /var/run/dpdk/spdk_pid93843 00:29:04.126 Removing: /var/run/dpdk/spdk_pid93928 00:29:04.126 Removing: /var/run/dpdk/spdk_pid94007 00:29:04.126 Removing: /var/run/dpdk/spdk_pid94137 00:29:04.126 Removing: /var/run/dpdk/spdk_pid94214 00:29:04.126 Removing: /var/run/dpdk/spdk_pid94303 00:29:04.126 Removing: /var/run/dpdk/spdk_pid94380 00:29:04.126 Removing: /var/run/dpdk/spdk_pid94743 00:29:04.126 Removing: /var/run/dpdk/spdk_pid95519 00:29:04.126 Removing: /var/run/dpdk/spdk_pid96912 00:29:04.126 Removing: /var/run/dpdk/spdk_pid97101 00:29:04.126 Removing: /var/run/dpdk/spdk_pid97391 00:29:04.126 Removing: /var/run/dpdk/spdk_pid97903 00:29:04.126 Removing: /var/run/dpdk/spdk_pid98258 00:29:04.126 Clean 00:29:04.126 17:28:46 -- common/autotest_common.sh@1453 -- # return 0 00:29:04.126 17:28:46 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:29:04.126 17:28:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.126 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:29:04.384 17:28:46 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:04.384 17:28:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:04.384 17:28:46 -- common/autotest_common.sh@10 -- # set +x 00:29:04.384 17:28:46 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:04.384 17:28:46 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:04.384 17:28:46 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:04.384 17:28:46 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:04.384 17:28:46 -- spdk/autotest.sh@398 -- # hostname 00:29:04.384 17:28:46 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:04.641 geninfo: WARNING: invalid characters removed from testname! 00:29:36.711 17:29:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:36.971 17:29:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:40.253 17:29:22 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:42.839 17:29:24 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.119 17:29:27 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:48.645 17:29:30 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:51.930 17:29:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:51.930 17:29:34 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:51.930 17:29:34 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:51.930 17:29:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:51.930 17:29:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:51.930 17:29:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:51.930 + [[ -n 5267 ]] 00:29:51.930 + sudo kill 5267 00:29:51.941 [Pipeline] } 00:29:51.960 [Pipeline] // timeout 00:29:51.966 [Pipeline] } 00:29:51.982 [Pipeline] // stage 00:29:51.988 [Pipeline] } 00:29:52.005 [Pipeline] // catchError 00:29:52.015 [Pipeline] stage 00:29:52.018 [Pipeline] { (Stop VM) 00:29:52.032 [Pipeline] sh 00:29:52.311 + vagrant halt 00:29:56.502 ==> default: Halting domain... 00:30:01.778 [Pipeline] sh 00:30:02.055 + vagrant destroy -f 00:30:06.237 ==> default: Removing domain... 00:30:06.247 [Pipeline] sh 00:30:06.523 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:30:06.532 [Pipeline] } 00:30:06.548 [Pipeline] // stage 00:30:06.555 [Pipeline] } 00:30:06.567 [Pipeline] // dir 00:30:06.572 [Pipeline] } 00:30:06.582 [Pipeline] // wrap 00:30:06.587 [Pipeline] } 00:30:06.596 [Pipeline] // catchError 00:30:06.618 [Pipeline] stage 00:30:06.620 [Pipeline] { (Epilogue) 00:30:06.632 [Pipeline] sh 00:30:06.921 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:15.043 [Pipeline] catchError 00:30:15.045 [Pipeline] { 00:30:15.059 [Pipeline] sh 00:30:15.358 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:16.748 Artifacts sizes are good 00:30:16.757 [Pipeline] } 00:30:16.770 [Pipeline] // catchError 00:30:16.781 [Pipeline] archiveArtifacts 00:30:16.788 Archiving artifacts 00:30:16.905 [Pipeline] cleanWs 00:30:16.916 [WS-CLEANUP] Deleting project workspace... 00:30:16.916 [WS-CLEANUP] Deferred wipeout is used... 00:30:16.922 [WS-CLEANUP] done 00:30:16.924 [Pipeline] } 00:30:16.939 [Pipeline] // stage 00:30:16.944 [Pipeline] } 00:30:16.957 [Pipeline] // node 00:30:16.962 [Pipeline] End of Pipeline 00:30:16.994 Finished: SUCCESS
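For reference, the coverage post-processing that ran before the VM teardown above reduces to one capture-and-filter pipeline. A condensed sketch, with the rc flags trimmed and the repo and output paths shortened to $SPDK and $OUT (the full invocations are in the log):

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
$LCOV -c --no-external -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"  # capture this run
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"          # drop vendored DPDK
$LCOV -r "$OUT/cov_total.info" '/usr/*' --ignore-errors unused,unused -o "$OUT/cov_total.info"
$LCOV -r "$OUT/cov_total.info" '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' \
      -o "$OUT/cov_total.info"                                              # strip tool/example code
rm -f cov_base.info cov_test.info                                           # scratch inputs, per the log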